SQL Server 2000 Exclusive Locking - locking

My setup: one server with two SQL Server 2000 instances, INSTANCE1 and INSTANCE2. Each instance has one database: DBprod on INSTANCE1 and DBstag on INSTANCE2.
I need to prepare invoices for several customers, so I would like to place an exclusive lock on a table while I fetch an invoice number from INSTANCE1.DBprod.LastInvoiceNumber into INSTANCE2.DBstag, perform some calculations, prepare an invoice, insert the invoice (header and detail) into INSTANCE1.DBprod, and then update INSTANCE1.DBprod.LastInvoiceNumber. I repeat this for the next customer and release the lock only after I am finished with all the customers.
begin trans inv
    EXCLUSIVELY LOCK INSTANCE1.DBprod.LastInvoiceNumber
    open customer cursor
    fetch next from customer
        get invoice number from INSTANCE1.DBprod.LastInvoiceNumber
        prepare invoice
        insert invoice to INSTANCE1.DBprod
        update INSTANCE1.DBprod.LastInvoiceNumber (increment by 1)
        fetch next from customer (prepare next customer invoice)
    close customer cursor
commit trans inv
RELEASE LOCK ON INSTANCE1.DBprod.LastInvoiceNumber
Would this be my solution: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE?
There is an accounting application using INSTANCE1.DBprod.LastInvoiceNumber which is why I want to exclusively lock the table until I am finished posting all my invoices.

One of the easiest ways is to use sp_getapplock to allow only one session into this block of code; other sessions will wait or fail, depending on the options you choose.
This is independent of lock granularity and isolation level, and it is often the better fit in this scenario. SERIALIZABLE on its own is not exclusive: you would need TABLOCKX for that, but then all other readers of the table are blocked too.
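For illustration, a minimal sketch of the TABLOCKX route, assuming the code runs on INSTANCE1 against DBprod and that the single row in LastInvoiceNumber has a column named LastNumber (the column name is an assumption):
BEGIN TRAN inv

DECLARE @next int

-- The exclusive table lock taken here is held until COMMIT, so the accounting
-- application (and every other reader) is blocked for the whole batch.
SELECT @next = LastNumber
FROM DBprod.dbo.LastInvoiceNumber WITH (TABLOCKX)

-- ... build and insert the invoice header/detail for each customer ...

UPDATE DBprod.dbo.LastInvoiceNumber SET LastNumber = @next + 1

COMMIT TRAN inv   -- the table lock is released here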
sp_getapplock, by contrast, will apply only to sessions that run this code, roughly as sketched below.
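A minimal sketch, assuming the distributed access between the two instances is already set up (via a linked server or similar); the resource name 'InvoicePosting' and the timeout are illustrative:
BEGIN TRAN inv

DECLARE @rc int

EXEC @rc = sp_getapplock
        @Resource = 'InvoicePosting',    -- any name all callers agree on
        @LockMode = 'Exclusive',
        @LockOwner = 'Transaction',      -- released automatically at COMMIT/ROLLBACK
        @LockTimeout = 60000             -- wait up to 60 seconds

IF @rc < 0
BEGIN
    ROLLBACK TRAN inv
    RETURN    -- could not acquire the lock
END

-- ... loop over the customers: read LastInvoiceNumber, prepare the invoice,
-- insert header and detail, increment LastInvoiceNumber ...

COMMIT TRAN inv    -- the application lock is released here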

Related

Row locks on select

I have a stored procedure which does the following:
selects top N from table
sets these rows as processed
returns these rows to the client
Here is roughly how I am doing it in Sybase ASE:
set rowcount @count
begin tran get_items
insert into #temp_table
select item
from available_item
where is_processed = 0
update available_item
set is_processed = 1
from available_item, #temp_table
where available_item.item = #temp_table.item
-- select the processed items...
commit tran
I am wondering whether there is a race condition here. If two separate processes execute this stored procedure at the same time, could they select and mark processed the same data? Or does having it in a transaction stop this?
If not, is there a way to hold locks on selected rows?
Some of the details will depend on your table's locking scheme. Allpages, page-level and row-level locking will have different impacts on your ability to run concurrent updates on a single table. I am assuming a page- or row-level scheme to allow for concurrency.
Your query will grab an initial shared page/row lock, which will be upgraded to an update lock, which will then be followed by an exclusive row lock on the updated pages/rows. No other processes will be able to make changes to the selected pages/rows for the duration of the transaction, but another process could read the selected rows prior to your update, which could lead to some inconsistency.
To get around this possibility, you can set the isolation level for the transaction to either isolation level 2 (repeatable reads) or isolation level 3 (serializable). You may want to read up on the specifics of each level to decide which one you want to enforce, and the trade-offs associated with it.
In your transaction, you would use it like this:
set rowcount @count
set transaction isolation level 2
...
Something to note is that, depending on the number of records you grab in your query, you could trigger lock escalation, which could prevent your concurrent transactions from executing even if they are not looking at the same rows as your first transaction. By default, the server will attempt to escalate to a table lock if it acquires locks on more than 200 pages/rows. This can be changed either to an explicit value or to a range of values and a percentage, and it is configurable at the server, database or table level.
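Putting the pieces together, a sketch of the procedure body with the stricter level applied (names as in the question; assumes @count is supplied by the caller and #temp_table already exists):
set rowcount @count
set transaction isolation level 3    -- or 2 for repeatable reads

begin tran get_items

    -- claim the unprocessed rows; level 2/3 keeps them from changing underneath us
    insert into #temp_table
    select item
    from available_item
    where is_processed = 0

    update available_item
    set is_processed = 1
    from available_item, #temp_table
    where available_item.item = #temp_table.item

    -- return the rows that were just claimed
    select item from #temp_table

commit tran

set rowcount 0    -- reset so later statements are not limited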
Relevant Documentation:
Transaction: Maintaining Data Consistency and Recovery
Performance and Tuning Series: Locking and Concurrency Control
Transact-SQL Users Guide 15.7 > Transactions: Maintaining Data Consistency and Recovery

What kind of lock is placed for SELECT statement within a transaction in SQL Server

I believe that each SELECT statement in SQL Server will cause either a Shared or Key lock to be placed. But will it place that same type of lock when it is in a transaction? Will Shared or Key locks allow other processes to read the same records?
For example I have the following logic
BEGIN TRAN;
-- select data that is needed for the next 2 statements
SELECT * FROM table1 WHERE id = 1000; -- Assuming this returns 10, 20, 30
-- insert data that was read from the first query
INSERT INTO table2 (a, b, c) VALUES (10, 20, 30);
-- update table 3 with data found in the first query
UPDATE table3
SET d = 10,
    e = 20,
    f = 30;
COMMIT;
At this point, will my SELECT statement still hold a shared or key lock, or will it get escalated to an exclusive lock? Will other transactions be able to read records from table1, or will they have to wait until my transaction is committed before they can select from it?
In an application, does it make sense to move the SELECT statement outside of the transaction and keep just the insert/update in one transaction?
A SELECT will always place a shared lock - unless you use the WITH (NOLOCK) hint (then no lock will be placed), use a READ UNCOMMITTED transaction isolation level (same thing), or unless you specifically override it with query hints like WITH (XLOCK) or WITH (UPDLOCK).
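For reference, a quick sketch of those hints applied to the table from the example (behavior as described above):
SELECT * FROM table1 WITH (NOLOCK)  WHERE id = 1000;  -- no shared locks taken, dirty reads possible
SELECT * FROM table1 WITH (UPDLOCK) WHERE id = 1000;  -- update locks: other readers OK, writers blocked
SELECT * FROM table1 WITH (XLOCK)   WHERE id = 1000;  -- exclusive locks on the data read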
A shared lock allows other reading processes to also acquire a shared lock and read the data - but they prevent exclusive locks (for insert, delete, update operations) from being acquired.
In this case, with just a few rows selected, there will be no lock escalation (that only kicks in when a single statement acquires roughly 5,000 locks on one table or index).
Depending on the transaction isolation level, those shared locks will be held for different amounts of time. With READ COMMITTED, the default level, each shared lock is released as soon as the data has been read, while with the REPEATABLE READ or SERIALIZABLE levels the locks are held until the transaction is committed or rolled back.
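If the intent is that the rows read by the SELECT must not change before the UPDATE, a common variation (a sketch, not something the question requires) is to take update locks up front and hold them for the whole transaction:
BEGIN TRAN;

-- UPDLOCK prevents other writers from touching these rows; HOLDLOCK keeps the
-- locks until COMMIT. Plain readers can still SELECT the rows in the meantime.
SELECT * FROM table1 WITH (UPDLOCK, HOLDLOCK) WHERE id = 1000;

INSERT INTO table2 (a, b, c) VALUES (10, 20, 30);

UPDATE table3
SET d = 10,
    e = 20,
    f = 30;

COMMIT;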

In SQL Server, when a deadlock occurs, which transaction will be aborted?

When a deadlock occurs in SQL Server, which of the transactions will be aborted? I mean, what is SQL Server's plan for deciding which of the transactions should be killed?
consider the two transaction below
Transaction A
begin transaction
update Customers set LastName = 'Kharazmi'
waitfor delay '00:00:05'; -- waits for 5 seconds
update Orders set OrderId=13
commit transaction
Transaction B
begin transaction
update Orders set OrderId=14
waitfor delay '00:00:05'; -- waits for 5 seconds
update Customers set LastName = 'EbneSina'
commit transaction
If both transactions are executed at the same time, transaction A locks and updates the Customers table while transaction B locks and updates the Orders table. After a delay of 5 seconds, transaction A looks for the lock on the Orders table, which is already held by transaction B, and transaction B looks for the lock on the Customers table, which is already held by transaction A. So neither transaction can proceed further and a deadlock occurs.
My question is: when the deadlock occurs, which of the two transactions will be aborted?
First I executed transaction A and then transaction B, and SQL Server aborted transaction A. When I ran transaction B first and then A, the result was the same and transaction A was aborted again. It confused me!
thanks for any help.
You can learn the criteria in SET DEADLOCK_PRIORITY (Transact-SQL):
Which session is chosen as the deadlock victim depends on each session's deadlock priority:
If both sessions have the same deadlock priority, the instance of SQL Server chooses the session that is less expensive to roll back as the deadlock victim. For example, if both sessions have set their deadlock priority to HIGH, the instance will choose as a victim the session it estimates is less costly to roll back.
If the sessions have different deadlock priorities, the session with the lowest deadlock priority is chosen as the deadlock victim.
For your case, A is presumably the transaction the engine considers less expensive to roll back.
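If you want to control the choice rather than leave it to the cost estimate, the priority can be set per session. A sketch (marking transaction A as the preferred victim is illustrative):
-- Run in the session executing transaction A:
SET DEADLOCK_PRIORITY LOW;   -- this session will be chosen as the victim over NORMAL/HIGH sessions

BEGIN TRANSACTION;
UPDATE Customers SET LastName = 'Kharazmi';
WAITFOR DELAY '00:00:05';
UPDATE Orders SET OrderId = 13;
COMMIT TRANSACTION;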
There's no way to know just by looking at the queries. Basically, as far as I understand things, the engine considers all transactions involved in the deadlock and tries to work out which one will be the cheapest to roll back.
So if, say, there are 2 rows in your Customers table and 9,000,000 rows in your Orders table, it will be considerably cheaper to roll back the UPDATE applied to Customers than the one that was applied to Orders.
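Whichever transaction is chosen, the victim's batch receives error 1205 and its work is rolled back, so callers commonly retry. A sketch of that pattern (not part of the answers above; assumes SQL Server 2012+ for THROW):
DECLARE @retries int = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        UPDATE Customers SET LastName = 'Kharazmi';
        UPDATE Orders SET OrderId = 13;
        COMMIT TRANSACTION;
        BREAK;                             -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1;   -- we were the deadlock victim: try again
        ELSE
            THROW;                         -- anything else: re-raise
    END CATCH
END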

waitfor problem in SQL Server

while 1 = 1
begin
    waitfor time @timeToRun

    begin
        /* delete some records from table X */
    end
end
In the code above, will SQL Server lock table X during the wait? I would like to insert records into table X during this wait time. Is it possible?
All write operations acquire X locks on the rows being updated (or deleted), and all these X locks are held until the transaction commits. Every statement creates an implicit transaction that commits automatically at the end of the statement if no transaction is explicitly specified.
So the answer to your question depends on whether you call this in the context of an existing transaction or not. If not (and assuming you do not start a transaction in the inner begin...end block and leave it open), then no lock will be held during the wait. If the code runs in the context of an existing transaction (e.g. a TransactionScope in the client, started automatically by the WCF service behavior), then any lock placed by the delete will be held while you wait, until the transaction is committed.
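To make the two cases concrete, a sketch (the delete predicate and the processed column are illustrative, not from the question):
-- Case 1: no outer transaction. The DELETE runs in its own implicit transaction,
-- so its row locks are released when the statement completes; nothing is locked
-- while WAITFOR sleeps.
DELETE FROM X WHERE processed = 1;
WAITFOR TIME @timeToRun;

-- Case 2: an outer transaction is open. The X locks taken by the DELETE are held
-- across the WAITFOR, until the outer COMMIT, and can block inserts that need
-- the same rows/pages.
BEGIN TRANSACTION;
DELETE FROM X WHERE processed = 1;
WAITFOR TIME @timeToRun;
COMMIT TRANSACTION;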
Two part question:
1) In the code above, will SQL Server lock table X during the wait?
No. It may lock the rows, but not the table.
2) I would like to insert records into table X during this wait time. Is it possible?
Yes, but they will be locked until your transaction commits.
Note: you will want to wrap any work you are doing in a transaction with a BEGIN/COMMIT block, as in the sketch below. This keeps the delete's locks scoped to the delete itself rather than leaving them held while you wait.
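A sketch of the loop with the delete in its own short transaction (the filter is a placeholder, as in the original code):
WHILE 1 = 1
BEGIN
    WAITFOR TIME @timeToRun;     -- no locks are held while waiting

    BEGIN TRANSACTION;
        DELETE FROM X WHERE processed = 1;   -- 'processed' is an assumed column
    COMMIT TRANSACTION;          -- row locks are released here, inserts can proceed
END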

SQL Server Insert query for a forum

Considering a forum table and many users simultaneously inserting messages into it, how safe is this transaction?
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
DECLARE @LastMessageId SMALLINT
SELECT @LastMessageId = MAX(MessageId)
FROM Discussions
WHERE ForumId = @ForumId AND DiscussionId = @DiscussionId
INSERT INTO Discussions
(ForumId, DiscussionId, MessageId, ParentId, MessageSubject, MessageBody)
VALUES
(@ForumId, @DiscussionId, @LastMessageId + 1, @ParentId, @MessageSubject, @MessageBody)
IF @@ERROR = 0
BEGIN
COMMIT TRANSACTION
RETURN 0
END
ROLLBACK TRANSACTION
RETURN 1
Here I read the last MessageId and increment it. I can't use an IDENTITY field because the number needs to be incremented per message within a group (not for every message inserted into the table).
Your transaction should be quite safe indeed - check out the MSDN docs on the SERIALIZABLE transaction level:
SERIALIZABLE
Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Range locks are placed in the range of key values that match the search conditions of each statement executed in a transaction. This blocks other transactions from updating or inserting any rows that would qualify for any of the statements executed by the current transaction. This means that if any of the statements in a transaction are executed a second time, they will read the same set of rows. The range locks are held until the transaction completes. This is the most restrictive of the isolation levels because it locks entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
The main problem with this transaction isolation level is that it puts a pretty heavy load on the server and serializes (as the name implies) access, so your server performance and scalability will suffer: with very high numbers of users, you'll possibly get lots of timeouts for users waiting for a transaction to finish.
So using the more lightweight approach of a global message id as an INT IDENTITY is definitely much better, roughly as sketched below.
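A sketch of that alternative (the GlobalMessageId column, the data types, and SCOPE_IDENTITY usage are assumptions; only Discussions and the original column names come from the question):
CREATE TABLE Discussions
(
    GlobalMessageId INT IDENTITY(1,1) PRIMARY KEY,  -- assigned by the engine, no MAX()+1 needed
    ForumId         INT NOT NULL,
    DiscussionId    INT NOT NULL,
    ParentId        INT NULL,
    MessageSubject  NVARCHAR(200) NOT NULL,
    MessageBody     NVARCHAR(4000) NOT NULL
);

-- The insert no longer needs SERIALIZABLE or an explicit transaction just to
-- compute the next number:
INSERT INTO Discussions (ForumId, DiscussionId, ParentId, MessageSubject, MessageBody)
VALUES (@ForumId, @DiscussionId, @ParentId, @MessageSubject, @MessageBody);

SELECT SCOPE_IDENTITY() AS NewMessageId;  -- the id that was just assigned, if the caller needs it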