I have two transactions: T1 with the SERIALIZABLE isolation level and T2 (I think with the default READ COMMITTED isolation level, but it doesn't matter).
Transaction T1 performs a SELECT, then a WAITFOR of 2 seconds, then another SELECT.
Transaction T2 performs an UPDATE on data which T1 read.
This causes a deadlock. Why doesn't transaction T2 simply wait for T1 to finish?
When T1 runs at the REPEATABLE READ isolation level, everything works as expected, i.e. phantom rows occur.
I thought that when I raised the isolation level to SERIALIZABLE, T2 would wait for T1 to finish.
This is part of a college exercise. I have to demonstrate the negative effects in two parallel transactions that run at an incorrect isolation level, and the absence of those effects at the correct isolation level.
Here's the code; unfortunately the field names are in Polish.
T1:
USE MR;
SET IMPLICIT_TRANSACTIONS OFF;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- query 1
SELECT
    www.IdSamochodu, s.Model, s.Marka, s.NrRejestracyjny,
    o.PESEL, o.Nazwisko, o.Imie, o.NrTelefonu
FROM WizytyWWarsztacie www
JOIN Samochody s
    ON s.IdSamochodu = www.IdSamochodu
JOIN Osoby o
    ON o.PESEL = s.PESEL
WHERE www.[Status] = 'gotowy_do_odbioru'
ORDER BY www.IdSamochodu ASC;
WAITFOR DELAY '00:00:02';
-- query 2
SELECT
    u.IdSamochodu, tu.Nazwa, tu.Opis, u.Oplata
FROM Uslugi u
JOIN TypyUslug tu
    ON tu.IdTypuUslugi = u.IdTypuUslugi
JOIN WizytyWWarsztacie www
    ON www.IdSamochodu = u.IdSamochodu
    AND www.DataOd = u.DataOd
WHERE www.[Status] = 'gotowy_do_odbioru'
ORDER BY u.IdSamochodu ASC, u.Oplata DESC;
COMMIT;
T2:
USE MR;
SET IMPLICIT_TRANSACTIONS OFF;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
UPDATE Uslugi
SET [Status] = 'wykonano'
WHERE IdUslugi = 2;

UPDATE www
SET www.[Status] = 'gotowy_do_odbioru'
FROM WizytyWWarsztacie www
WHERE www.[Status] = 'wykonywanie_usług'
  AND EXISTS (
      SELECT 1
      FROM Uslugi u
      WHERE u.IdSamochodu = www.IdSamochodu
        AND u.DataOd = www.DataOd
        AND u.[Status] = 'wykonano'
      GROUP BY u.IdSamochodu, u.DataOd
      HAVING COUNT(u.IdUslugi) = (
          SELECT COUNT(u2.IdUslugi)
          FROM Uslugi u2
          WHERE u2.IdSamochodu = www.IdSamochodu
            AND u2.DataOd = www.DataOd
          GROUP BY u2.IdSamochodu, u2.DataOd
      )
  );
COMMIT;
I use SQL Server Management Studio and have each transaction in a different file. I run them by pressing F5 in T1, then quickly switching to the file containing T2 and pressing F5 again.
I have read about deadlocks and the locking mechanism in MSSQL, but apparently I haven't understood the topic yet.
Deadlock issue in SQL Server 2008 R2 (.Net 2.0 Application)
SQL Server deadlocks between select/update or multiple selects
Deadlock on SELECT/UPDATE
http://msdn.microsoft.com/en-us/library/ms173763(v=sql.105).aspx
http://www.sql-server-performance.com/2004/advanced-sql-locking/
Edit:
I figured out that the first UPDATE statement in T2 causes the problem. Why?
Troubleshooting deadlocks starts with obtaining the deadlock graph. This is an XML document that tells you the relevant bits about the transactions and resources involved. You can get it through Profiler, Extended Events, or event notifications (I'm sure there are other methods, but this will do for now). Once you have the graph, examine it to see which locks of which type each transaction held on which resources. Where you go from there really depends on what's in the graph, so I'll stop there. Bottom line: obtain the deadlock graph and mine it for details.
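For example, on SQL Server 2008 and later you can usually pull recent deadlock graphs out of the built-in system_health Extended Events session without setting anything up in advance. A minimal sketch (the XML paths follow the standard xml_deadlock_report event; older graphs age out of the ring buffer):
-- Pull recent deadlock graphs from the system_health session
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT XEvent.query('.') AS XEvent
    FROM (
        SELECT CAST(st.target_data AS xml) AS TargetData
        FROM sys.dm_xe_session_targets st
        JOIN sys.dm_xe_sessions s ON s.[address] = st.event_session_address
        WHERE s.[name] = 'system_health' AND st.target_name = 'ring_buffer'
    ) AS Data
    CROSS APPLY Data.TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent)
) AS src;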
As an aside, to say that one or the other transaction is "causing" the deadlock is somewhat misleading. All transactions involved in the deadlock were necessary to cause the deadlock situation so neither is more at fault.
I had some problems with my SQL Server Management Studio (Profiler didn't work), but I finally obtained the deadlock graph. This article was helpful.
To understand the graph I had to learn about the locking mechanism and the lock symbols.
I think it is explained quite clearly here.
Now that I know about all this stuff, the cause of the deadlock is quite obvious.
I've made a sequence diagram for the described situation:
As I wrote earlier, when we get rid of the first UPDATE statement in transaction T2, the deadlock does not occur.
In this situation T2 does not acquire a lock on the pk_uslugi index, so the second SELECT statement in transaction T1 executes successfully and the pk_wizytywwarsztacie index is released. After that, T2 also finishes.
The problem could be this:
The T1 SELECT S-locks the row.
The T2 UPDATE U-locks the row (succeeds).
The T2 UPDATE X-locks the row (waits; the lock request is queued).
T1 tries to S-lock the row again, but the S-lock request is incompatible with the queued X-lock.
Locks in SQL Server are queued. If the head of the queue waits, everything else behind it also waits.
Actually, I'm not entirely sure that this is the cause, because the same problem should then occur with REPEATABLE READ. I'm still posting this idea hoping that it helps.
I ran into a similar issue where I was selecting from a list of available items and then inserting those items into a holding-queue table. With too many concurrent requests, the SELECT statement would return items that were also being selected concurrently by another parallel request. When attempting to insert them into the holding-queue table, I would receive a unique-constraint error (because the same item couldn't go into the holding table twice).
I then tried wrapping a SERIALIZABLE transaction around the whole thing, but then I ran into DEADLOCK errors because both transactions were holding a lock on the unique-constraint index (determined from my deadlock graph).
I was finally able to resolve the issue by using an exclusive row lock within the SELECT statement.
You could try using an exclusive row lock on the table/rows in question. This ensures that T1 finishes its work on those rows before T2 attempts to update them.
EXAMPLE:
SELECT *
FROM Uslugi u WITH (XLOCK, ROWLOCK)
I'm not sure yet of the performance impact of this but while running load testing using multiple threads, it doesn't appear to have a negative impact.
Related
We're considering turning on read committed snapshot isolation level to help with some of our table locking problems.
We have a few stored procedures that use the table hint WITH (ROWLOCK, READPAST) as a queuing based table setup. This prevents multiple worker roles from reading the same row.
I'm a little concerned that the read committed snapshot could potentially break the queuing system and more than one worker role could read the same row. Does anyone know if turning on RCS isolation would break this process?
WITH q AS
(
    SELECT TOP (1) Column1, OperationStatusType
    FROM Table1 c WITH (ROWLOCK, READPAST)
    WHERE c.NextAttemptDate <= GETUTCDATE()
    ORDER BY c.NextAttemptDate ASC
)
UPDATE q
SET OperationStatusType = 8
OUTPUT inserted.Column1
An UPDATE takes U-locks on rows of the target table while it checks whether they match your filter or not. After that the lock is either released or converted to an X-lock further down the query pipeline.
It is not possible to affect write locking using locking hints or the isolation level. You are safe. U-locks will be taken.
I'm in the habit of always adding WITH (UPDLOCK) redundantly as well so that this behavior is documented in the code.
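For example, the queue query above could carry the redundant hint like this (a sketch reusing the question's hypothetical table and columns):
WITH q AS
(
    SELECT TOP (1) Column1, OperationStatusType
    FROM Table1 c WITH (ROWLOCK, READPAST, UPDLOCK) -- UPDLOCK is redundant for an UPDATE, but documents the locking intent
    WHERE c.NextAttemptDate <= GETUTCDATE()
    ORDER BY c.NextAttemptDate ASC
)
UPDATE q
SET OperationStatusType = 8
OUTPUT inserted.Column1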
Given an UPDATE that takes around 5 minutes to execute, what happens when a SELECT tries to retrieve data from the same table? For different transaction isolation levels, and for SELECT WITH (NOLOCK), does the SELECT wait for the UPDATE? If not, does the SELECT return the old data (the data before the UPDATE), or part of the currently inserted records (such as 50% of the records being inserted)?
I found the following question, but it only describes what happens when you execute an UPDATE during a long SELECT.
SQL Server - does [SELECT] lock [UPDATE]?
I am using MS SQL Server 2012. Hopefully, this behaviour is consistent for different implementations.
This post by Gavin Draper explains it quite well and contains some example queries.
SQL Server Isolation Levels By Example
Isolation levels in SQL Server control the way locking works between
transactions.
SQL Server 2008 supports the following isolation levels
Read Uncommitted
Read Committed (The default)
Repeatable Read
Serializable
Snapshot
Before I run through each of these in detail, you may want to create a
new database to run the examples. Run the following script on the new
database to create the sample data. Note: you'll also want to drop
the IsolationTests table and re-run this script before each example to
reset the data.
CREATE TABLE IsolationTests
(
    Id INT IDENTITY,
    Col1 INT,
    Col2 INT,
    Col3 INT
)
INSERT INTO IsolationTests(Col1,Col2,Col3)
SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
UNION ALL SELECT 1,2,3
Also before we go any further it is important to understand these two
terms….
Dirty Reads – This is when you read uncommitted data. When doing this there is no guarantee that the data read will ever be committed, meaning the data could well be bad.
Phantom Reads – This is when data that you are working with has been changed by another transaction since you first read it in. This means subsequent reads of this data in the same transaction could well be different.
Read Uncommitted
This is the lowest isolation level there is. Read uncommitted causes
no shared locks to be requested which allows you to read data that is
currently being modified in other transactions. It also allows other
transactions to modify data that you are reading.
As you can probably imagine this can cause some unexpected results in
a variety of different ways. For example, data returned by the select
could be in a halfway state: if an update was running in another
transaction, some of your rows could come back with the updated
values and some without.
To see read uncommitted in action, let's run Query1 in one tab of
Management Studio and then quickly run Query2 in another tab before
Query1 completes.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
--Simulate having some intensive processing here with a wait
WAITFOR DELAY '00:00:10'
ROLLBACK
Query2
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM IsolationTests
Notice that Query2 does not wait for Query1 to finish; more
importantly, Query2 returns dirty data. Remember, Query1 rolls back all
its changes, yet Query2 has returned the data anyway. This is
because it didn't wait for the transactions holding exclusive
locks on this data; it just returned what was there at the time.
There is a syntactic shortcut for querying data using the read
uncommitted isolation level by using the NOLOCK table hint. You
could change the above Query2 to look like this and it would do the
exact same thing.
SELECT * FROM IsolationTests WITH(NOLOCK)
Read Committed
This is the default isolation level and means selects will only return
committed data. Select statements will issue shared lock requests
against the data you're querying; this causes you to wait if another
transaction already has an exclusive lock on that data. Once you have
your shared lock, any other transactions trying to modify that data
will request an exclusive lock and be made to wait until your Read
Committed transaction finishes.
You can see an example of a read transaction waiting for a modify
transaction to complete before returning the data by running the
following Queries in separate tabs as you did with Read Uncommitted.
Query1
BEGIN TRAN
UPDATE IsolationTests SET Col1 = 2
--Simulate having some intensive processing here with a wait
WAITFOR DELAY '00:00:10'
ROLLBACK
Query2
SELECT * FROM IsolationTests
Notice how Query2 waited for the first transaction to complete before
returning and also how the data returned is the data we started off
with as Query1 did a rollback. The reason no isolation level was
specified is because Read Committed is the default isolation level for
SQL Server. If you want to check what isolation level you are running
under you can run DBCC useroptions. Remember isolation levels are
Connection/Transaction specific so different queries on the same
database are often run under different isolation levels.
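For example, running this in the session you want to inspect lists the session options, including the effective isolation level:
DBCC USEROPTIONS -- look for the 'isolation level' row in the output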
Repeatable Read
This is similar to Read Committed but with the additional guarantee
that if you issue the same select twice in a transaction you will get
the same results both times. It does this by holding on to the shared
locks it obtains on the records it reads until the end of the
transaction. This means any transactions that try to modify these
records are forced to wait for the read transaction to complete.
As before, run Query1 and then, while it's running, run Query2.
Query1
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
UPDATE IsolationTests SET Col1 = -1
Notice that Query1 returns the same data for both selects even though
you ran a query to modify the data before the second select ran. This
is because the update was forced to wait for Query1 to finish: the
shared locks taken by the first select are held until the end of the
transaction under Repeatable Read, blocking the update's exclusive
lock request.
If you rerun the above Queries but change Query1 to Read Committed you
will notice the two selects return different data and that Query2 does
not wait for Query1 to finish.
One last thing to know about Repeatable Read is that the data can
change between two queries if more records are added. Repeatable Read
guarantees that records queried by a previous select will not be changed
or deleted, but it does not stop new records from being inserted, so it
is still very possible to get Phantom Reads at this isolation level.
Serializable
This isolation level takes Repeatable Read and adds the guarantee that
no new data will be added, eradicating the chance of getting Phantom
Reads. It does this by placing range locks on the queried data, which
causes any other transactions trying to modify or insert data touched
by this transaction to wait until it has finished.
You know the drill by now run these queries side by side…
Query1
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
INSERT INTO IsolationTests(Col1,Col2,Col3)
VALUES (100,100,100)
You'll see that the insert in Query2 waits for Query1 to complete
before it runs, eradicating the chance of a phantom read. If you change
the isolation level in Query1 to Repeatable Read, you'll see the
insert is no longer blocked and the two select statements in Query1
return a different number of rows.
Snapshot
This provides nearly the same guarantees as serializable. So what's the
difference? Well, it's more in the way it works: using snapshot doesn't
block other queries from inserting or updating the data touched by the
snapshot transaction. Instead, row versioning is used: when data is
changed, the old version is kept in tempdb so that existing transactions
will see the version without the change. When all transactions that
started before the change are complete, the previous row version is
removed from tempdb. This means that even if another transaction has
made changes, you will always get the same results as you did the first
time in that transaction.
So on the plus side, you're not blocking anyone else from modifying the
data whilst you run your transaction, but… you're using extra
resources on the SQL Server to hold multiple versions of your changes.
To use the snapshot isolation level you need to enable it on the
database by running the following command
ALTER DATABASE IsolationTests
SET ALLOW_SNAPSHOT_ISOLATION ON
If you rerun the examples from serializable but change the isolation
level to snapshot you will notice that you still get the same data
returned but Query2 no longer waits for Query1 to complete.
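A minimal sketch of that rerun (assuming ALLOW_SNAPSHOT_ISOLATION was enabled as above):
Query1
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT * FROM IsolationTests
WAITFOR DELAY '00:00:10'
SELECT * FROM IsolationTests
ROLLBACK
Query2
INSERT INTO IsolationTests(Col1,Col2,Col3)
VALUES (100,100,100)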
Summary
You should now have a good idea how each of the different isolation
levels work. You can see how the higher the level you use the less
concurrency you are offering and the more blocking you bring to the
table. You should always try to use the lowest isolation level you can
which is usually read committed.
READ UNCOMMITTED: The SELECT can read all kinds of nasty inconsistencies: old rows, new rows, duplicate rows, missing rows. It can also error out entirely with the famous "data movement" error.
READ COMMITTED: Will block without READ_COMMITTED_SNAPSHOT. With it, the SELECT returns the old state in perfect consistency.
REPEATABLE READ / SERIALIZABLE: Will block.
SNAPSHOT: Will return the old state in perfect consistency.
It sounds like you should read a few concurrency tutorials. I have written these brief facts to get you started. To really understand what's going on to the point that you can make predictions (that come true) you need to go deeper than an answer on Stack Overflow can provide.
Most of the time, you want to use SNAPSHOT for read-only transactions. It takes away all concurrency concerns. Be aware that it has a few drawbacks.
I have a complex Query (Without any locking hints) that takes data from many tables say Table1,Table2,Table3
Below is the code to retrieve the data (there are no transactions):
IDbCommand sqlCmd = dbHelper.CreateCommand(Helper.MyConnString, sbSQL.ToString(), CommandType.Text, arParms);
sqlCmd.CommandTimeout = 300;
ds = dbHelper.ExecuteDataset(sqlCmd);
In an application this query runs every 2 minutes.
When I fire a simple update query, say
Update Table1 set Col1='abc' where ID=100
(where ID is an int and the primary key + clustered index),
the update query gets delayed and often times out.
Below is the log.
How can I fix this?
You could execute your query inside a transaction that has an isolation level of SNAPSHOT. That way, your query won't acquire any (shared) locks and your UPDATE doesn't have to wait for the exclusive lock (given that the source of the locks on the table that block your UPDATE is really your query that the application runs every two minutes...)
For reference, have a look at Working with Snapshot Isolation and SET TRANSACTION ISOLATION LEVEL on MSDN.
EDIT due to comment:
First, turn on ALLOW_SNAPSHOT_ISOLATION for your database:
ALTER DATABASE YourDB SET ALLOW_SNAPSHOT_ISOLATION ON
Then, write your query as follows:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT Column1, Column2, ...
FROM Table1
LEFT JOIN Table2
ON Table1.Column45 = Table2.Column3
[... your complex query ...]
Is that enough for an example?
If you don't want reading queries to wait while data is being modified, it is best to use READ_COMMITTED_SNAPSHOT.
It can be turned on transparently, doesn't affect your app code, and has virtually no side effects.
SNAPSHOT has more side effects: for example, although you take no locks on data modifications, you can get update-conflict problems at commit time, and these are very difficult to deal with.
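A minimal sketch of turning READ_COMMITTED_SNAPSHOT on (assuming a database named YourDB; the ALTER needs exclusive access to the database, so either close other connections or add WITH ROLLBACK IMMEDIATE):
-- Readers now see the last committed row version instead of blocking;
-- no application code changes are required.
ALTER DATABASE YourDB SET READ_COMMITTED_SNAPSHOT ON;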
I am using SSIS 2008 to execute multiple stored procedures in parallel in control flow.
Each SP is supposed to ultimately update 1 row in a table.
The point to note is that each SP has a defined responsibility to update specific columns only.
It is guaranteed that the different SPs will not update each other's columns. So the columns to be updated are divided between the various SPs, but as per design, each SP is supposed to work on the same row ultimately.
At the moment some of my SPs error out due to deadlocks. I am guessing this may be because other SPs hold locks on that row?
How can I work this out?
You have to admit, this seems like a highly unusual thing to do. I wonder if it wouldn't be better to update separate tables and then have a single update statement at the end that joins the individual tables to the final one (i.e. update a set a.Col1 = ... from a inner join b ... inner join c ... etc.); see the sketch below.
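A minimal sketch of that idea, with hypothetical staging tables StagingB and StagingC each holding only the columns its SP is responsible for:
-- Each SP writes its columns to its own staging table instead of the shared row,
-- then a single final statement merges everything into the target row.
UPDATE a
SET a.ColFromB = b.ColFromB,
    a.ColFromC = c.ColFromC
FROM FinalTable a
INNER JOIN StagingB b ON b.RowId = a.RowId
INNER JOIN StagingC c ON c.RowId = a.RowId;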
But, if you want to continue down this path, then just set READ UNCOMMITTED from within each of your stored procedures. That is the best bet.
The deadlocks are probably caused by more than just having another SP locking the row. Under that situation the first procedure would just wait until the locking SP releases the lock. That's not to say that your multiple procedures are not causing the problem. There's more to it though.
You may have to do some rework to avoid the problem, but first you should find out more about the deadlock situation. I suspect that you have locks on objects other than the row that's being updated.
There are ways to gather more information on the deadlock. Here's a link where you can learn about the deadlock details.
Simple: make sure you do not leave locks anywhere. This works with the transaction isolation level determined by the connection. With the proper transaction isolation level there will be no lingering locks, so no deadlocks.
The update is not the problem. It starts with the READS. Go to READ UNCOMMITTED to make sure you do not leave locks upon read, and/or use the NOLOCK hint in the SELECT statements to individually force them to leave no locks in place while reading data (more advisable).
If you then make sure that the SP commits pretty much immediately after the update (internally or externally), no deadlock should be possible: a write lock will simply make the next SP wait for the first transaction to commit.
Especially when going down to one table / one row only, a deadlock is not possible with update statements only, IIRC. It is the read locks that turn a delaying lock (albeit a small delay) into a deadlock.
http://www.eggheadcafe.com/software/aspnet/30976898/how-does-update-lock-prevent-deadlocks.aspx has a nice example of a deadlock. So, if everything BUT the update takes no locks, then a deadlock is not possible.
Row-level locking is as low as you can safely go, so I think you will have to break the updated table up into several tables that you can update independently of each other.
No, two sessions cannot update the same row simultaneously. Row-level locking is as low as it gets, so when one session is updating a row, other sessions wanting to update that record will wait.
As for the deadlocks, the trouble with SQL Server is that, by default, SELECT statements will block when asking for a record that is currently being updated. You can use WITH (NOLOCK) if you don't mind reading uncommitted data.
So, if you've got this order of events:
SessionA
begin transaction
update t1
set c1 = 'x'
where c2 = 5
SessionB
begin transaction
update t2
set c1 = 'y'
where c2 = 7
SessionA
select * from t2 where c2 = 7
<waits on SessionB>
SessionB
select * from t1 where c2 = 5
<waits on SessionA>...oops. Deadlock.
The trick is to only lock something for as little time as is necessary (don't break up a series of statements just to release a lock - make sure that steps that make up a logical transaction stay as a transaction):
begin transaction
update t1
set c1 = 'x'
where c2 = 5
commit
Or (caveat emptor) use the NOLOCK hint:
SessionA
select * from t2 with (nolock) where c2 = 7
<gets the uncommitted value of 'y'>
SessionB
select * from t1 with (nolock) where c2 = 5
<gets the uncommitted value of 'x'>
I'm writing a high volume trading system. We receive messages at around 300-500 per second and these messages then need to be saved to the database as quickly as possible. These messages get deposited on a Message Queue and are then read from there.
I've implemented a Competing Consumer pattern, which reads from the queue and allows for multithreaded processing of the messages. However I'm getting a frequent primary key violation while the app is running.
We're running SQL 2008. The sample table structure would be:
CREATE TABLE TableA
(
    MessageSequence INT PRIMARY KEY,
    Data VARCHAR(50)
)
A stored procedure gets invoked to persist this message and looks something like this:
BEGIN TRANSACTION

INSERT TableA (MessageSequence, Data)
SELECT @MessageSequence, @Data
WHERE NOT EXISTS
(
    SELECT TOP 1 MessageSequence FROM TableA WHERE MessageSequence = @MessageSequence
)

IF (@@ROWCOUNT = 0)
BEGIN
    UPDATE TableA
    SET Data = @Data
    WHERE MessageSequence = @MessageSequence
END

COMMIT TRANSACTION
All of this is in a TRY...CATCH block so if there's an error, it rolls back the transaction.
I've tried using table hints, like ROWLOCK, but it hasn't made a difference. Since the INSERT is evaluated as a single statement, it seems ludicrous that I'm still getting a primary-key violation on insert.
Does anyone have an idea why this is happening? And have you got ANY ideas which may point me in the direction of a solution?
Why is this happening?
SELECT TOP 1 MessageSequence FROM TableA WHERE MessageSequence = @MessageSequence
This SELECT will try to locate the row; if it is not found, the EXISTS operator returns FALSE and the INSERT proceeds. However, the decision to INSERT is based on a state that was true at the time of the SELECT but is no longer guaranteed to be true at the time of the INSERT. In other words, you have a race condition where two threads can both look up the same @MessageSequence, both find NOT EXISTS, and both try to INSERT; only the first one will succeed, and the second one will cause a PK violation.
How do I solve it?
The quickest fix is to add a WITH (UPDLOCK) hint to the SELECT; this will enforce that the lock placed on the @MessageSequence key is retained, and thus the INSERT/SELECT behaves atomically:
INSERT TableA (MessageSequence, Data)
SELECT @MessageSequence, @Data
WHERE NOT EXISTS (
    SELECT TOP 1 MessageSequence FROM TableA WITH (UPDLOCK) WHERE MessageSequence = @MessageSequence)
To prevent SQL Server from doing fancy stuff like page locking, you can also add the ROWLOCK hint.
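That is, a sketch of the lookup with both hints:
SELECT TOP 1 MessageSequence
FROM TableA WITH (UPDLOCK, ROWLOCK) -- row-level update lock, held until the transaction ends
WHERE MessageSequence = @MessageSequence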
However, that is not my recommendation. My recommendation may surprise you, but it is this: do the operation that is most likely to succeed and handle the error if it fails. I.e. if your business case makes it more likely for the @MessageSequence to be new, try the INSERT and handle the PK violation if it fails. This way you avoid the spurious look-ups, and the cost of the catch/retry is amortized over the many cases where it succeeds on the first try.
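A minimal sketch of that insert-first approach (2627 is SQL Server's error number for a primary key violation; table and variable names follow the question):
BEGIN TRY
    INSERT INTO TableA (MessageSequence, Data)
    VALUES (@MessageSequence, @Data);
END TRY
BEGIN CATCH
    -- PK violation: the row already exists, so update it instead
    IF ERROR_NUMBER() = 2627
        UPDATE TableA
        SET Data = @Data
        WHERE MessageSequence = @MessageSequence;
    ELSE
        -- anything else is unexpected; surface it to the caller
        RAISERROR('Unexpected error persisting message', 16, 1);
END CATCH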
Also, it is perhaps worth investigating using the built-in queues that come with SQL Server.
Common problem. Explained here:
Defensive database programming: eliminating IF statements
It might be related to the transaction isolation level. You might need
SET TRANSACTION ISOLATION LEVEL READ COMMITTED
before you start the transaction.
Also, if you have more updates than inserts, you should try the update first, check the rowcount, and do the insert second, as sketched below.
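A sketch of that update-first variant (note it has the same race as the original code unless you add a hint such as UPDLOCK or retry on error):
UPDATE TableA
SET Data = @Data
WHERE MessageSequence = @MessageSequence;

IF (@@ROWCOUNT = 0)
    INSERT INTO TableA (MessageSequence, Data)
    VALUES (@MessageSequence, @Data);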
This is very similar to post 939831. Ultimately you want to use the hints (ROWLOCK, READPAST, UPDLOCK). READPAST tells SQL Server to skip to the next record if the current one is locked. UPDLOCK tells SQL Server that the read lock is going to escalate to an update lock.
When I implemented something similar I locked the next record by the threadID
UPDATE TOP (1) foo
SET ProcessorID = @PROCID
FROM OrderTable foo WITH (ROWLOCK, READPAST, UPDLOCK)
WHERE ProcessorID = 0
Then selected the record
SELECT *
FROM foo WITH (NOLOCK)
WHERE ProcessorID = @PROCID
Then marked it as processed
UPDATE foo
SET ProcessorID = -1
WHERE ProcessorID = @PROCID
Later, during off hours, I perform the relatively expensive delete operation to clear the queue of processed records.
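That cleanup could be as simple as this sketch (assuming, per the steps above, that processed rows are marked with ProcessorID = -1):
-- Off-hours cleanup of processed queue records
DELETE FROM OrderTable
WHERE ProcessorID = -1;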
The atomicity of the following statement is what you are after:
INSERT TableA (MessageSequence, Data)
SELECT @MessageSequence, @Data
WHERE NOT EXISTS
(
    SELECT TOP 1 MessageSequence FROM TableA WHERE MessageSequence = @MessageSequence
)
According to this person, it depends on the current isolation level.
On a tangent, if you're thinking of a high volume trading system you might want to consider a tick database designed for such data [I'm not exactly sure what "message" you are storing here], such as discussed in this thread for example: http://www.elitetrader.com/vb/showthread.php?threadid=81345.
These are typically in-memory solutions with proprietary query languages. We use kdb+ at our shop.
Not sure what messaging product you use, but it may be worth looking at the transactions not at the DB level but at the MQ level.
Of course, if you are using a TM (transaction manager), the two operations, 1) get from MQ and 2) write to DB, are both 'bracketed' under the same parent commit.
So I am not sure if you are using an implicit, explicit, or any TM here (for example, Microsoft's DTC).
MessageSequence is the PK, so could the same message from the MQ be getting processed twice?
When you perform a 'GET' from MQ, make sure the GET is committed (i.e. not a DB commit but an MQ commit); that will ensure the same MessageID cannot be 'popped' by the next thread that writes messages to the DB.