How do I use a database to manage a semaphore?

If several instances of the same code are running on different servers, I would like to use a database to make sure a process doesn't start on one server if it's already running on another.
I could probably come up with some workable SQL commands that used Oracle transaction processing, latches, or whatever, but I'd rather find something that's tried and true.
Years ago, a developer who was a SQL wiz had a single SQL transaction that took the semaphore and returned true if it got it, and false if it didn't. At the end of my processing I'd run another SQL transaction to release the semaphore. That would be cool, but I don't know whether a database-supported semaphore can have a timeout. A timeout would be a huge bonus!
EDIT:
Here are what might be some workable SQL commands, though there is no timeout except via a cron-job hack:
---------------------------------------------------------------------
--Setup
---------------------------------------------------------------------
CREATE TABLE "JOB_LOCKER" ( "JOB_NAME" VARCHAR2(128 BYTE), "LOCKED" VARCHAR2(1 BYTE), "UPDATE_TIME" TIMESTAMP (6) );
CREATE UNIQUE INDEX "JOB_LOCKER_PK" ON "JOB_LOCKER" ("JOB_NAME") ;
ALTER TABLE "JOB_LOCKER" ADD CONSTRAINT "JOB_LOCKER_PK" PRIMARY KEY ("JOB_NAME");
ALTER TABLE "JOB_LOCKER" MODIFY ("JOB_NAME" NOT NULL ENABLE);
ALTER TABLE "JOB_LOCKER" MODIFY ("LOCKED" NOT NULL ENABLE);
insert into job_locker (job_name, locked) values ('myjob','N');
commit;
---------------------------------------------------------------------
--Execute at the beginning of the job
--AUTOCOMMIT MUST BE OFF!
---------------------------------------------------------------------
select * from job_locker where job_name='myjob' and locked = 'N' for update NOWAIT;
--returns one record if it's ok. Otherwise returns ORA-00054. Any other thread attempting to get the record gets ORA-00054.
update job_locker set locked = 'Y', update_time = sysdate where job_name = 'myjob';
--1 row updated. Any other thread attempting to get the record gets ORA-00054.
commit;
--Any other thread attempting to get the record with locked = 'N' gets zero results.
--You could have code to poll for that job name with locked = 'Y' and, if there are still zero results, add the record.
---------------------------------------------------------------------
--Execute at the end of the job
---------------------------------------------------------------------
update job_locker set locked = 'N', update_time = sysdate where job_name = 'myjob';
--Any other thread attempting to get the record with locked = 'N' gets no results.
commit;
--One record returned to any other thread attempting to get the record with locked = 'N'.
---------------------------------------------------------------------
--If the above 'end of the job' fails to run (system crash, etc)
--The 'locked' entry would need to be changed from 'Y' to 'N' manually
--You could have a periodic job to look for old timestamps and locked='Y'
--to clear those.
---------------------------------------------------------------------
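For example, the periodic cleanup mentioned above could be as simple as the following sketch (the one-hour staleness threshold is an assumption; pick whatever fits your jobs):
---------------------------------------------------------------------
--Sketch: release locks whose holder presumably crashed
---------------------------------------------------------------------
update job_locker set locked = 'N', update_time = systimestamp
where locked = 'Y' and update_time < systimestamp - interval '1' hour;
commit;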

You should look into DBMS_LOCK. Essentially, it exposes the enqueue locking mechanism that Oracle uses internally, except that it lets you define a lock type of 'UL' (user lock). Locks can be held shared or exclusive, and a request to take a lock, or to convert a lock from one mode to another, supports a timeout.
I think it will do what you want.
Hope that helps.
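A minimal sketch of the idea (the lock name and the ten-second timeout are illustrative; DBMS_LOCK.REQUEST returns 0 on success and 1 on timeout):
declare
  v_lockhandle varchar2(128);
  v_result     integer;
begin
  -- map a name of our choosing to a lock handle (shared across sessions)
  dbms_lock.allocate_unique('MYJOB_SEMAPHORE', v_lockhandle);
  -- try to take the lock exclusively, waiting at most 10 seconds
  v_result := dbms_lock.request(lockhandle        => v_lockhandle,
                                lockmode          => dbms_lock.x_mode,
                                timeout           => 10,
                                release_on_commit => false);
  if v_result = 0 then
    null; -- got the semaphore: do the work, then call dbms_lock.release(v_lockhandle)
  elsif v_result = 1 then
    null; -- timed out: another server is already running the job
  end if;
end;
/
A nice side effect: if the session holding the lock dies, Oracle releases the lock automatically, so a crashed job cannot wedge the semaphore the way a table-based flag can.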


How to read all "last_changed" records from Firebird DB?

My question is a bit tricky, because it's mostly a logical problem.
I've tried to optimize my app's speed by reading everything into memory, but only those records which changed since the "last read" = the greatest timestamp among the records loaded last time.
The FirebirdSQL engine does not allow a field to be updated directly in an AFTER trigger, so the obvious approach is a BEFORE UPDATE OR INSERT trigger that sets new.last_changed = current_timestamp;
The problem:
As it turns out, this is a totally WRONG method, because the timestamp those triggers record reflects when the statement executed, which can be long before the commit!
So if one transaction takes more time than another, its saved "last changed time" will be lower than that of a short-burst transaction fired and finished in between.
1st tr.: 13:00:01.400 .............................Commit << this record will be skipped!
2nd tr.: 13:00:01.500......Commit << reading of data will happen here.
The next read will only fetch records with a timestamp >= 13:00:01.500
I've tried:
rewriting all the triggers so they fire AFTER and call an UPDATE orders SET ... << but this causes circular, self-calling, endless trigger events.
Would a SET_CONTEXT lock interfere with multi-row updates and nested triggers?
(I do not see how this method could work well when running multiple updates in the same transaction.)
What is the common solution for all this?
Edit1:
What I want is to read only those records from the DB that actually changed since the last read. For that to happen, I need the engine to update records AFTER COMMIT, not during it, "in the middle".
This trigger is NOT good, because it fires at the moment of the change, not after the commit:
alter trigger SYNC_ORDERS active after insert or update position 999 AS
declare variable N timestamp;
begin
  N = cast('NOW' as timestamp);
  if (new.last_changed <> :N) then
    update ORDERS set last_changed = :N where ID = new.ID;
end
And from the application I do:
Query1.SQL.Text := 'SELECT * FROM orders WHERE last_changed >= ' + DateTimeToStr( latest_record );
Query1.Open;
latest_record := Query1.FieldByName('last_changed').asDateTime;
.. this code will list only the record committed in the second (shorter) transaction, and never the first, longer-running transaction (committed later).
Edit2:
It seems I have the same question as here..., but specifically for FirebirdSQL.
There are not really any good solutions there, but it gave me an idea:
- What if I create an extra table and log changes there per table, but no more often than every 5 minutes?
- Before each SQL query, I first check that table for any changes, sequenced by a growing ID!
- Delete lines older than 23 hours
ID   TableID   Changed
=============================
1    5         2019.11.27 19:36:21
2    5         2019.11.27 19:31:19
Edit3:
As Arioch already suggested, one solution is to:
create a "logger table" that every table's BEFORE INSERT OR UPDATE trigger writes to,
and update its "last_changed" sequence from the ON TRANSACTION COMMIT trigger.
But would the following not be a better approach?
add one last_sequence INT64 DEFAULT NULL column to every table
create a global generator LAST_GEN
update every NULL row of every table with gen_id(LAST_GEN,1) inside the ON TRANSACTION COMMIT trigger
SET it back to NULL in every BEFORE INSERT OR UPDATE trigger
So basically the last_sequence column of a record switches to:
NULL > 1 > NULL > 34 ... every time it gets modified.
This way I:
do not have to fill the DB with log data,
and I can query the tables directly with WHERE last_sequence > 1.
There is no need to pre-query the "logger table" first.
I'm just afraid: WHAT happens if the ON TRANSACTION COMMIT trigger tries to update a last_sequence field while a second transaction's ON BEFORE trigger is locking the record (of another table)?
Can this happen at all?
The final solution is based on these ideas:
Each table's BEFORE INSERT OR UPDATE trigger can push the time of the transaction: RDB$SET_CONTEXT('USER_TRANSACTION', 'table31', current_timestamp);
The global ON TRANSACTION COMMIT trigger can insert a sequence + time into a "logging table" if it finds such a context.
It can even take care of "daylight saving changes" and "intervals", by logging only big time differences, like >= 1 minute, to reduce the number of records.
A stored procedure can ease and speed up the calculation of each query's LAST_QUERY_TIME.
Example:
1.)
create trigger ORDERS_BI active before insert or update position 0 AS
BEGIN
  IF (NEW.ID IS NULL) THEN
    NEW.ID = GEN_ID(GEN_ORDERS,1);
  RDB$SET_CONTEXT('USER_TRANSACTION', 'orders_table', current_timestamp);
END
2, 3.)
create trigger TRG_SYNC_AFTER_COMMIT active on transaction commit position 1 as
declare variable N TIMESTAMP;
declare variable T VARCHAR(255);
begin
  N = cast('NOW' as timestamp);
  T = RDB$GET_CONTEXT('USER_TRANSACTION', 'orders_table');
  if (:T is not null) then begin
    if (:N < :T) then T = :N; -- system time changed, e.g. daylight saving -1 hour
    if (datediff(second from :T to :N) > 60) then -- more than 1 min. passed
      insert into "SYNC_PAST_TIMES" (ID, TABLE_NUMBER, TRG_START, SYNC_TIME, C_USER)
      values (GEN_ID(GEN_SYNC_PAST_TIMES, 1), 31, cast(:T as timestamp), :N, CURRENT_USER);
  end;
  -- other tables too:
  T = RDB$GET_CONTEXT('USER_TRANSACTION', 'details_table');
  -- ...
  when any do EXIT;
end
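For reference, the logging table and generator used above might be declared like this (a sketch; the column types are inferred from the trigger, and the C_USER length is an assumption):
create table SYNC_PAST_TIMES (
  ID BIGINT not null primary key,
  TABLE_NUMBER SMALLINT,
  TRG_START TIMESTAMP,
  SYNC_TIME TIMESTAMP,
  C_USER VARCHAR(31)
);
create generator GEN_SYNC_PAST_TIMES;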
Edit1:
It is possible to speed up the readout of the "last-time-changed" value from our SYNC_PAST_TIMES table with the help of a stored procedure. Logically, you have to keep both the ID (PT_ID) and the time (PT_TM) in memory in your program, so you can pass them in for each table.
CREATE PROCEDURE SP_LAST_MODIF_TIME (
  TABLE_NUMBER SMALLINT,
  LAST_PASTTIME_ID BIGINT,
  LAST_PASTTIME TIMESTAMP)
RETURNS (
  PT_ID BIGINT,
  PT_TM TIMESTAMP)
AS
declare variable TEMP_TIME TIMESTAMP;
declare variable TBL SMALLINT;
begin
  PT_TM = :LAST_PASTTIME;
  FOR SELECT p.ID, p.SYNC_TIME, p.TABLE_NUMBER FROM SYNC_PAST_TIMES p
      WHERE (p.ID > :LAST_PASTTIME_ID)
      ORDER by p.ID ASC
      INTO PT_ID, TEMP_TIME, TBL DO --the PT_ID gets an increasing value immediately
  begin
    if (:TBL = :TABLE_NUMBER) then
      if (:TEMP_TIME < :PT_TM) then
        PT_TM = :TEMP_TIME; --searching for the smallest
  end
  if (:PT_ID IS NULL) then begin
    PT_ID = :LAST_PASTTIME_ID;
    PT_TM = :LAST_PASTTIME;
  end
  suspend;
END
You can use this procedure by including it in your SELECT, using the WITH .. AS format:
with UTLS as (select first 1 PT_ID, PT_TM
              from SP_LAST_MODIF_TIME(55, -- TABLE_NUMBER
                   0, '1899.12.30 00:00:06.000')) -- last PT_ID, PT_TM from your APP
select first 1000 u.PT_ID, current_timestamp as NOWWW, r.*
from UTLS u, "Orders" r
where (r.SYNC_TIME >= u.PT_TM);
Using FIRST 1000 is a must, to prevent reading the whole table if all values changed at once.
Upgrading the database (adding a new column, etc.) makes SYNC_TIME change to NOW on every row of the table at the same time.
You may adjust it per table individually, just like the interval of seconds used to monitor changes. Add a check to your APP for how to handle the case when the new data reaches 1000 rows at once ...

Ensure data consistency when same stored procedure is called by windows service in interval of few seconds

We have a stored procedure which returns the list of pending items that need to be processed. A Windows service calls this stored procedure every 20 seconds to get the pending items for further processing.
There is a QueryTimestamp column in the Pending table. For pending items the QueryTimestamp column is null. Once an item is selected by the stored procedure, its QueryTimestamp is updated with the current date and time.
The body is as follows. No explicit transaction is used, so SQL Server's default isolation level applies.
DECLARE @workerPending TABLE
(
    RowNum INT IDENTITY PRIMARY KEY,
    [PendingId] BIGINT,
    [CreatedDate] DATETIME
)
INSERT INTO @workerPending ([PendingId], [CreatedDate])
SELECT
    [p].[PendingId] AS [PendingId],
    [p].CreatedDate
FROM
    [pending] [p]
WHERE
    [p].QueryTimestamp IS NULL
ORDER BY
    [p].[PendingId]
--Update pending table with current date time
UPDATE Pnd
SET QueryTimestamp = GETDATE()
FROM [Pending] Pnd
INNER JOIN @workerPending [wp] ON [wp].[PendingId] = Pnd.[PendingId]
If the stored procedure cannot finish processing the first request within 20 seconds due to the volume of data, the Windows service sends another call to the stored procedure, and it starts processing both requests.
The concern is: does this cause the two requests to return some duplicate pending records?
Do we need to implement a LOCK on the Pending table?
Please suggest how we can ensure data consistency, so that if another request comes to the stored procedure while the previous request is still in progress, no duplicate records are returned.
EDIT: There is another Windows service which calls another SP that inserts records into the Pending table with QueryTimestamp set to null.
A simple but effective solution is, when the service wants to call the SP, to do this (a T-SQL sketch follows the steps):
Read from the Settings table a value that tells you whether the thread is already running
If not already running then
begin
Write to table Settings that the thread has started
Commit this update
Call your SP
Write to table Settings that the thread has finished
Commit this update
end
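A minimal sketch of that idea (the Settings table, its columns, and the procedure name dbo.GetPendingItems are assumptions for illustration; the UPDLOCK/HOLDLOCK hints serialize concurrent checks of the flag):
BEGIN TRANSACTION;
DECLARE @running BIT;
-- read the flag; the hints block any other caller doing the same read
SELECT @running = IsRunning
FROM Settings WITH (UPDLOCK, HOLDLOCK)
WHERE SettingName = 'PendingWorker';
IF @running = 0
BEGIN
    UPDATE Settings SET IsRunning = 1 WHERE SettingName = 'PendingWorker';
    COMMIT;                     -- release the lock before the long-running call
    EXEC dbo.GetPendingItems;   -- hypothetical name for the SP in question
    UPDATE Settings SET IsRunning = 0 WHERE SettingName = 'PendingWorker';
END
ELSE
    COMMIT;                     -- another thread is running; just let go
Note that if the service crashes between the two flag updates the flag stays set, so you may also want a timestamp column and a staleness check on it.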
You can do the UPDATE and SELECT in a single step with an OUTPUT clause, e.g.
UPDATE pending
SET QueryTimestamp = GETDATE()
OUTPUT inserted.PendingId, inserted.CreatedDate
INTO @workerPending (PendingId, CreatedDate)
WHERE QueryTimestamp IS NULL
Or a more robust pattern, which allows you to limit and order the results and retrieve them concurrently, is to use a transaction and locking hints on the SELECT, e.g.:
begin transaction
INSERT INTO @workerPending ([PendingId], [CreatedDate])
SELECT top 100
    [p].[PendingId] AS [PendingId],
    [p].CreatedDate
FROM
    [pending] [p] with (updlock, rowlock, readpast)
WHERE
    [p].QueryTimestamp IS NULL
ORDER BY
    [p].[PendingId];
UPDATE pending
SET QueryTimestamp = GETDATE()
WHERE PendingId IN (SELECT PendingId FROM @workerPending);
commit transaction
See Using tables as Queues

SQL: Change tracking for entity

QUESTION:
What approach should I use to notify one database about changes made to a table in another database? Note: I need one notification per statement-level event; this includes the MERGE statement, which does an insert, update, and delete in one.
BACKGROUND:
We're working with a third party to transfer some data from one system to another. There are two databases of interest here, one which the third party populates with normalised staging data and a second database which will be populated with the de-normalised post processed data. I've created MERGE scripts which do the heavy lifting of processing and transferral of the data from these staging tables into our shiny denormalised version, and I've written a framework which manages the data dependencies such that look-up tables are populated prior to the main data etc.
I need a reliable way to be notified of when the staging tables are updated so that my import scripts are run autonomously.
METHODS CONSIDERED:
SQL DML Triggers
I initially created a generic trigger which sends change information to the denormalised database via Service Broker; however, this trigger fires three times, once each for insert, update, and delete, and thus sends three distinct messages, which causes the import process to run three times for a single data change. It should be noted that these staging tables are updated using SQL Server's MERGE functionality, so the change happens in a single statement.
SQL Query Notification
This appears to be perfect for what I need; however, there doesn't appear to be any way to subscribe to notifications from within SQL Server, it can only be used to notify of changes at an application layer written in .NET. I guess I may be able to manage this via CLR integration, but I'd still need to drive the notification down to the processing database to trigger the import process. This appears to be my best option, although it will be long-winded, difficult to debug and monitor, and probably over-complicates an otherwise simple issue.
SQL Event Notification
This would be perfect, although it doesn't appear to work for DML, regardless of what you might find in the MS documentation. The CREATE EVENT NOTIFICATION command takes a single parameter for event_type, so it can be thought of as operating at the database level. DML operates at an entity level, and there doesn't appear to be any way to target a specific entity using the defined syntax.
SQL Change Tracking
This captures changes on a database table, but at a row level, which seems too heavy-handed for what I require. I simply need to know that a change has happened; I'm not really interested in which rows or how many. Besides, I'd still need to convert this into an event to trigger the import process.
SQL Change Data Capture
This is an extension of Change Tracking that records both the change and the history of the change at the row level. This is again far too detailed, and it still leaves me with the issue of turning it into a notification of some kind so that the import process can be kicked off.
SQL Server Default Trace / Audit
This appears to require a target, which must be either the Windows Application/Security event log or a file on disk, which I'd struggle to monitor and hook into for changes.
ADDITIONAL
My trigger-based method would work wonderfully if only the trigger fired once. I have considered creating a table to record the first of the three DML commands, which could then be used to suppress the posting of information within the other two trigger invocations; however, I'm reasonably sure that all three DML triggers (insert, update, delete) will fire in parallel, rendering this method futile.
Can anyone please advise on a suitable approach that ideally doesn't use a scheduled job to check for changes? Any suggestions gratefully received.
The simplest approach has been to create a secondary table to record when the trigger code is run.
CREATE TABLE [service].[SuspendTrigger]
(
[Index] [int] IDENTITY(1,1) NOT NULL,
[Name] [nvarchar](200) NOT NULL,
[DateTime] [datetime] NOT NULL,
[SPID] [int] NOT NULL,
CONSTRAINT [pk_suspendtrigger_index] PRIMARY KEY CLUSTERED
(
[Index] ASC
) ON [PRIMARY]
) ON [PRIMARY]
Triggers run sequentially, so even when a MERGE statement is applied to an existing table, the insert, update, and delete trigger code runs one after the other.
The first time we enter the trigger, we can therefore write to this suspension table to record the event and then execute whatever code needs to be executed.
The second time we enter the trigger, we can check whether a record already exists and so prevent execution of any further statements.
alter trigger [dbo].[trg_ADDRESSES]
on [dbo].[ADDRESSES]
after insert, update, delete
as
begin
    set nocount on;
    -- determine the trigger action - note the trigger may fire
    -- when there is nothing in either the inserted or deleted table
    ------------------------------------------------------
    declare @action as nvarchar(6) = (case when (    exists ( select top 1 1 from inserted )
                                                 and exists ( select top 1 1 from deleted )) then N'UPDATE'
                                           when exists ( select top 1 1 from inserted ) then N'INSERT'
                                           when exists ( select top 1 1 from deleted ) then N'DELETE'
                                      end)
    -- check for valid action
    -------------------------
    if @action is not null
    begin
        if not exists ( select *
                        from [service].[SuspendTrigger] as [suspend]
                        where [suspend].[SPID] = @@SPID
                          and [suspend].[DateTime] >= dateadd(millisecond, -300, getdate())
                      )
        begin
            -- insert a suspension event
            -----------------------------
            insert into [service].[SuspendTrigger]
            (
                [Name] ,
                [DateTime] ,
                [SPID]
            )
            select object_name(@@procid) as [Name] ,
                   getdate() as [DateTime] ,
                   @@SPID as [SPID]
            -- determine the message content to send
            ----------------------------------------
            declare @content xml = (
                select getdate() as [datetime] ,
                       db_name() as [source/catalogue] ,
                       'DBO' as [source/schema] ,
                       'ADDRESS' as [source/table] ,
                       (select [sessions].[session_id] as [@id] ,
                               [sessions].[login_time] as [login_time] ,
                               case when ([sessions].[total_elapsed_time] >= 864000000000) then
                                   formatmessage('%02i DAYS %02i:%02i:%02i.%04i',
                                       (([sessions].[total_elapsed_time] / 10000 / 1000 / 60 / 60 / 24)),
                                       (([sessions].[total_elapsed_time] / (1000*60*60)) % 24),
                                       (([sessions].[total_elapsed_time] / (1000*60)) % 60),
                                       (([sessions].[total_elapsed_time] / (1000*01)) % 60),
                                       (([sessions].[total_elapsed_time] ) % 1000))
                               else
                                   formatmessage('%02i:%02i:%02i.%i',
                                       (([sessions].[total_elapsed_time] / (1000*60*60)) % 24),
                                       (([sessions].[total_elapsed_time] / (1000*60)) % 60),
                                       (([sessions].[total_elapsed_time] / (1000*01)) % 60),
                                       (([sessions].[total_elapsed_time] ) % 1000))
                               end as [duration] ,
                               [sessions].[row_count] as [row_count] ,
                               [sessions].[reads] as [reads] ,
                               [sessions].[writes] as [writes] ,
                               [sessions].[program_name] as [identity/program_name] ,
                               [sessions].[host_name] as [identity/host_name] ,
                               [sessions].[nt_user_name] as [identity/nt_user_name] ,
                               [sessions].[login_name] as [identity/login_name] ,
                               [sessions].[original_login_name] as [identity/original_name]
                        from [sys].[dm_exec_sessions] as [sessions]
                        where [sessions].[session_id] = @@SPID
                        for xml path('session'), type)
                for xml path('persistence_change'), root('change_tracking'))
            -- holds the current procedure name
            -----------------------------------
            declare @procedure_name nvarchar(200) = object_name(@@procid)
            -- send a message to any remote listeners
            -----------------------------------------
            exec [service].[usp_post_content_via_service_broker] @MessageContentType = 'Source Data Change', @MessageContent = @content, @CallOriginator = @procedure_name
        end
    end
end
GO
All we need to do now is create an index on the [DateTime] field within the suspension table so that it is used during the check. I'll probably also create a job to clear down any entries older than a couple of minutes, to keep the contents small.
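Something along these lines would cover both (a sketch; the index name and the two-minute threshold are assumptions):
create nonclustered index [ix_suspendtrigger_spid_datetime]
on [service].[SuspendTrigger] ([SPID], [DateTime])
delete from [service].[SuspendTrigger]
where [DateTime] < dateadd(minute, -2, getdate())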
Either way, this provides a way of ensuring that only one notification is generated per table-level modification.
If you're interested, the message contents will look something like this ...
<change_tracking>
<persistence_change>
<datetime>2016-08-01T16:08:10.880</datetime>
<source>
<catalogue>[MY DATABASE NAME]</catalogue>
<schema>DBO</schema>
<table>ADDRESS</table>
</source>
<session id="1014">
<login_time>2016-08-01T15:03:01.993</login_time>
<duration>00:00:01.337</duration>
<row_count>1</row_count>
<reads>37</reads>
<writes>68</writes>
<identity>
<program_name>Microsoft SQL Server Management Studio - Query</program_name>
<host_name>[COMPUTER NAME]</host_name>
<nt_user_name>[MY ACCOUNT]</nt_user_name>
<login_name>[MY DOMAIN]\[MY ACCOUNT]</login_name>
<original_name>[MY DOMAIN]\[MY ACCOUNT]</original_name>
</identity>
</session>
</persistence_change>
</change_tracking>
I could send over the action that triggered the notification but I'm only really interested in the fact that some data has changed in this table.

Select for Update Lock

I want to prevent simultaneous updates (by multiple sessions) to a record in my stored procedure.
1. I am using a SELECT FOR UPDATE statement for the particular row I want to update. This locks the record.
2. I update the record and then commit it, so the lock is released and the record becomes available for another user/session to work with.
However, when I run the procedure, I find that simultaneous updates are happening, meaning the SELECT FOR UPDATE is not working as expected.
Please provide some suggestions.
Sample code is below:
IF <condition> THEN
  -- do something
ELSIF <condition> THEN
  BEGIN
    SELECT HIGH_NBR INTO P_NBR FROM ROUTE
    WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>
    FOR UPDATE OF HIGH_NBR;
    UPDATE ROUTE SET HIGH_NBR = (HIGH_NBR + 1)
    WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>;
    COMMIT;
  END;
END IF;
In a multi-user environment, I am observing that the SELECT FOR UPDATE lock is not happening.
I just tested the scenario with two different computers (sessions). Here is what I did:
1. From one computer, I executed the SELECT FOR UPDATE statement, locking the row.
2. From another computer, I executed an UPDATE statement for the same record.
The update did not happen, and the update statement did not complete, even after a long time.
When will the lock be released if we issue a SELECT FOR UPDATE for a record?
First of all, you need to set autocommit to false before starting the query.
To check that your code is working, you can use two Java threads with a CyclicBarrier.
You should also add timestamps to your code, to check when each point in the code is reached.
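The timeline below sketches the expected behaviour across two sessions (the literal key values are placeholders):
-- Session 1 (autocommit off)
SELECT HIGH_NBR FROM ROUTE
WHERE LC_CD = 1 AND ROUTE_NBR = 2
FOR UPDATE OF HIGH_NBR;   -- the row is now locked by session 1
-- Session 2, meanwhile
UPDATE ROUTE SET HIGH_NBR = HIGH_NBR + 1
WHERE LC_CD = 1 AND ROUTE_NBR = 2;   -- blocks here, waiting on session 1's lock
-- Session 1
COMMIT;   -- releases the lock; session 2's update now proceeds
If autocommit is on in session 1, the implicit transaction ends as soon as the SELECT completes and the lock is released immediately, which is exactly the symptom of simultaneous updates described above.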

SQL Server 2005 Deadlock Problem

I’m running into a deadlock problem when trying to lock some records so that no other process (Windows service) picks the items up to service them, then updating the status and returning a recordset.
Can you please let me know why I am getting deadlocks when this proc is invoked?
CREATE PROCEDURE [dbo].[sp_LoadEventsTemp]
(
    @RequestKey varchar(20),
    @RequestType varchar(20),
    @Status varchar(20),
    @ScheduledDate smalldatetime = null
)
AS
BEGIN
    declare @LoadEvents table
    (
        id int
    )
    BEGIN TRANSACTION
    if (@ScheduledDate is null)
    BEGIN
        insert into @LoadEvents (id)
        (
            Select eventqueueid FROM eventqueue
            WITH (HOLDLOCK, ROWLOCK)
            WHERE requestkey = @RequestKey
            and requesttype = @RequestType
            and [status] = @Status
        )
    END
    else
    BEGIN
        insert into @LoadEvents (id)
        (
            Select eventqueueid FROM eventqueue
            WITH (HOLDLOCK, ROWLOCK)
            WHERE requestkey = @RequestKey
            and requesttype = @RequestType
            and [status] = @Status
            and (convert(smalldatetime, scheduleddate) <= @ScheduledDate)
        )
    END
    update eventqueue set [status] = 'InProgress'
    where eventqueueid in (select id from @LoadEvents)
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
    END
    ELSE
    BEGIN
        COMMIT TRANSACTION
        select * from eventqueue
        where eventqueueid in (select id from @LoadEvents)
    END
END
Thanks in advance.
Do you have a non-clustered index defined as:
CREATE NONCLUSTERED INDEX NC_eventqueue_requestkey_requesttype_status
ON eventqueue (requestkey, requesttype, status)
INCLUDE (eventqueueid)
and another on eventqueueid?
BTW, the conversion of the scheduleddate column to the smalldatetime type will prevent any use of an index on that column.
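Assuming scheduleddate is a datetime column, the predicate stays index-friendly if you compare the column directly and let SQL Server widen the smalldatetime parameter instead (a sketch):
WHERE requestkey = @RequestKey
  AND requesttype = @RequestType
  AND [status] = @Status
  AND scheduleddate <= @ScheduledDate   -- no convert() on the column itself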
First of all, as you're running SQL Server, I'd recommend you install the Performance Dashboard, which is a very handy tool to identify what locks are currently being held on the server.
Performance Dashboard link
Second, take a trace of your SQL Server using SQL Profiler (already installed) and make sure that in the Events Selection you pick the item Locks > Deadlock graph, which will show what is causing the deadlock.
You need to be very clear in your mind about what a deadlock is before you start troubleshooting it.
Whenever any table or row in the DB is accessed, a lock is taken.
Let's call the two sessions SPID 51 and SPID 52 (SPID = SQL Process ID).
SPID 51 locks cell A
SPID 52 locks cell B
If, within the same transaction, SPID 51 requests cell B, it will wait until SPID 52 releases it.
If, within the same transaction, SPID 52 requests cell A, you have a deadlock, because this situation will never resolve (51 waiting for 52 and 52 for 51).
I have to tell you that it isn't easy to troubleshoot, but you have to dig deeper to find the resolution.
Deadlocks happen most often (in my experience) when different resources are locked within different transactions in different orders.
Imagine 2 processes using resource A and B, but locking them in different orders.
- Process 1 locks resource A, then resource B
- Process 2 locks resource B, then resource A
The following then becomes possible:
- Process 1 locks resource A
- Process 2 locks resource B
- Process 1 tries to lock resource B, then stops and waits as Process 2 has it
- Process 2 tries to lock resource A, then stops and waits as Process 1 has it
- Both processes are waiting for each other: deadlock
In your case, we would need to see exactly where the SP falls over due to the deadlock (the update, I'd guess?) and any other processes that reference that table. It could be a trigger or something, which then gets deadlocked on a different table, not the table you're updating.
What I would do is use SQL Server 2005's OUTPUT syntax to avoid having to use the transaction...
UPDATE
    eventqueue
SET
    status = 'InProgress'
OUTPUT
    inserted.*
WHERE
    requestkey = @RequestKey
    AND requesttype = @RequestType
    AND status = @Status
    AND (convert(smalldatetime, scheduleddate) <= @ScheduledDate OR @ScheduledDate IS NULL)