SQL Compact Lock Timeout When Update Contains a Subquery - locking

I am using SQL Server Compact 3.5 SP2 (3.5.8085.0) and am having trouble getting an update statement to complete.
Below is my update statement:
UPDATE MyTable
SET ColumnB = 'SomeOtherValue'
WHERE ColumnA IN (
SELECT TOP(90000)
ColumnA
FROM MyTable
WHERE ColumnB = 'SomeValue'
ORDER BY ColumnA
)
This query will run for anywhere between 5 seconds and 1+ hour. Very rarely it will finish, but most of the time I get:
Major Error 0x80004005, Minor Error 25090
SQL Server Compact timed out waiting for a lock. The default lock time
is 2000ms for devices and 5000ms for desktops. The default lock
timeout can be increased in the connection string using the ssce:
default lock timeout property. [ Session id = 1,Thread id =
6876,Process id = 5712,Table name = MyTable,Conflict type = iu lock (u
blocks),Resource = TAB ]
I am positive there are no other connections to this database. Only my SSMS query.
Thoughts?
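One avenue worth noting (taking the property name straight from the error message above, so treat the exact syntax as an assumption): the lock timeout can be raised in the connection string. A sketch, with a hypothetical database path and a 60-second timeout in milliseconds:

```
Data Source=C:\Data\MyDb.sdf;ssce:default lock timeout=60000;
```

This only buys the statement more time to wait; it does not address why the update holds a table-level lock for so long.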

Related

WAIT or NOWAIT? That is the question. PostgreSQL vs Oracle

I am migrating from Oracle to PostgreSQL and I have a question.
In Oracle I used to run a query like this:
SELECT id FROM table_name WHERE id = '123' FOR UPDATE WAIT 30;
As far as I understand, PostgreSQL only has the NOWAIT option, so I have changed the query to this:
SELECT id FROM table_name WHERE id = '123' FOR UPDATE;
The question is: how can I set a lock timeout? I saw that I could send additional queries, for example:
set lock_timeout = 30000; or set lock_timeout = '30s';
select ... for update ...
set lock_timeout = 0;
However, that adds two extra queries, and I don't want that. Is there another way to set a lock timeout?
It is possible to configure the lock timeout in the PostgreSQL server configuration.
Change the parameter lock_timeout = '30s' in pgsql/11/data/postgresql.conf.
After that, reload the configuration or restart PostgreSQL.
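An alternative to the server-wide setting (not mentioned in the original answer, but standard PostgreSQL behavior): SET LOCAL scopes the setting to a single transaction, so it reverts automatically at COMMIT or ROLLBACK and no third "reset" statement is needed:

```sql
BEGIN;
SET LOCAL lock_timeout = '30s';  -- applies only inside this transaction
SELECT id FROM table_name WHERE id = '123' FOR UPDATE;
COMMIT;  -- lock_timeout reverts to its previous value here
```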

A distributed transaction is waiting for lock

I am trying to copy all the values from a column OLD_COL into another column NEW_COL inside the same table.
To achieve the result I want, I wrote down the following UPDATE in Oracle:
UPDATE MY_TABLE
SET NEW_COL = OLD_COL
WHERE NEW_COL IS NULL;
where MY_TABLE is a big table composed of 400,000 rows.
When I try to run it, it fails with the error:
SQL Error: ORA-02049: timeout: distributed transaction waiting for lock
02049. 00000 - "timeout: distributed transaction waiting for lock"
*Cause: exceeded INIT.ORA distributed_lock_timeout seconds waiting for lock.
*Action: treat as a deadlock
So I tried running the following query to update a single row:
UPDATE MY_TABLE
SET NEW_COL = OLD_COL
WHERE ID = '1'
and this works as intended.
Therefore, why can't I update all the rows in my table? Why is this error showing up?
Because there are many rows in your table, the UPDATE holds locks for a long time.
Oracle's distributed lock timeout defaults to 60 seconds; if the statement waits longer than that for a lock, this error is raised.
You can try increasing the timeout value:
ALTER SYSTEM SET distributed_lock_timeout=120;
or disable distributed recovery:
ALTER SYSTEM DISABLE DISTRIBUTED RECOVERY;
https://docs.oracle.com/cd/A84870_01/doc/server.816/a76960/ds_txnma.htm
Note:
Remember: distributed_lock_timeout is a static parameter, so after changing it with ALTER SYSTEM you need to restart the instance for it to take effect.

SQL Server 2000 - Update rows and return updated rows

I am wondering how to rewrite the following SQL Server 2005/2008 script for SQL Server 2000 which didn't have OUTPUT yet.
Basically, I would like to update rows and return the updated rows without creating deadlocks.
Thanks in advance!
UPDATE TABLE
SET Locked = 1
OUTPUT INSERTED.*
WHERE Locked = 0
You can't do this cleanly in SQL Server 2000.
What you can do is use a transaction and some lock hints to prevent a race condition. Your main problem is two processes accessing the same row(s), not a deadlock. See SQL Server Process Queue Race Condition for more.
BEGIN TRANSACTION
SELECT * FROM TABLE WITH (ROWLOCK, READPAST, UPDLOCK) WHERE Locked = 0
UPDATE TABLE
SET Locked = 1
WHERE Locked = 0
COMMIT TRANSACTION
I haven't tried this, but you could also try a SELECT from INSERTED inside an UPDATE trigger.

SQL Server 2005 Deadlock Problem

I'm running into a deadlock problem when trying to lock some records so that no other process (Windows service) picks the same items to service, then update their status, and then return a recordset.
Can you please let me know why am I getting the deadlock issue when this proc is invoked?
CREATE PROCEDURE [dbo].[sp_LoadEventsTemp]
(
@RequestKey varchar(20),
@RequestType varchar(20),
@Status varchar(20),
@ScheduledDate smalldatetime = null
)
AS
BEGIN
declare @LoadEvents table
(
id int
)
BEGIN TRANSACTION
if (@ScheduledDate is null)
Begin
insert into @LoadEvents (id)
(
Select eventqueueid FROM eventqueue
WITH (HOLDLOCK, ROWLOCK)
WHERE requestkey = @RequestKey
and requesttype = @RequestType
and [status] = @Status
)
END
else
BEGIN
insert into @LoadEvents (id)
(
Select eventqueueid FROM eventqueue
WITH (HOLDLOCK, ROWLOCK)
WHERE requestkey = @RequestKey
and requesttype = @RequestType
and [status] = @Status
and (convert(smalldatetime,scheduleddate) <= @ScheduledDate)
)
END
update eventqueue set [status] = 'InProgress'
where eventqueueid in (select id from @LoadEvents)
IF @@ERROR <> 0
BEGIN
ROLLBACK TRANSACTION
END
ELSE
BEGIN
COMMIT TRANSACTION
select * from eventqueue
where eventqueueid in (select id from @LoadEvents)
END
END
Thanks in advance.
Do you have a non-clustered index defined as:
CREATE NONCLUSTERED INDEX NC_eventqueue_requestkey_requesttype_status
ON eventqueue (requestkey, requesttype, status)
INCLUDE (eventqueueid)
and another on eventqueueid?
BTW the conversion of column scheduleddate to smalldatetime type will prevent any use of an index on that column.
First of all, as you're running SQL Server 2005, I'd recommend you install the Performance Dashboard, which is a very handy tool for identifying what locks are currently held on the server.
Performance Dashboard link
Second, take a trace of your SQL Server using SQL Profiler (already installed) and make sure that in Events Selection you select Locks > Deadlock graph, which will show what is causing the deadlock.
You need a clear picture of what a deadlock is before you can troubleshoot it.
Whenever a table or row in the DB is accessed, a lock is taken.
Call the two sessions SPID 51 and SPID 52 (SPID = SQL Process ID):
SPID 51 locks cell A
SPID 52 locks cell B
If, in the same transaction, SPID 51 requests cell B, it will wait until SPID 52 releases it.
If, in the same transaction, SPID 52 then requests cell A, you have a deadlock, because this situation can never resolve (51 waiting for 52, and 52 waiting for 51).
It isn't easy to troubleshoot, but you have to dig deeper to find the resolution.
Deadlocks happen most often (in my experience) when different resources are locked within different transactions in different orders.
Imagine 2 processes using resource A and B, but locking them in different orders.
- Process 1 locks resource A, then resource B
- Process 2 locks resource B, then resource A
The following then becomes possible:
- Process 1 locks resource A
- Process 2 locks resource B
- Process 1 tries to lock resource B, then stops and waits as Process 2 has it
- Process 2 tries to lock resource A, then stops and waits as Process 1 has it
- Both processes are waiting for each other: deadlock
In your case we would need to see exactly where the SP falls over due to a deadlock (the update, I'd guess?) and any other processes that reference that table. It could be a trigger or something, which then gets deadlocked on a different table, not the table you're updating.
What I would do is use SQL Server 2005's OUTPUT syntax to avoid having to use the transaction...
UPDATE
eventqueue
SET
status = 'InProgress'
WHERE
requestkey = #RequestKey
AND requesttype = #RequestType
AND status = #Status
AND (convert(smalldatetime,scheduleddate) <= #ScheduledDate OR #ScheduledDate IS NULL)
OUTPUT
inserted.*

The best way to use a DB table as a job queue (a.k.a batch queue or message queue)

I have a database table with ~50K rows in it; each row represents a job that needs to be done. I have a program that extracts a job from the DB, does the job, and puts the result back in the DB. (This system is running right now.)
Now I want to allow more than one processing task to do jobs, but be sure that no job is done twice (as a performance concern, not because this would cause other problems). Because the access is by way of a stored procedure, my current thought is to replace said stored procedure with something that looks like this:
update tbl
set owner = connection_id()
where available and owner is null limit 1;
select stuff
from tbl
where owner = connection_id();
BTW: workers might drop their connection between getting a job and submitting the results. Also, I don't expect the DB to come anywhere close to being the bottleneck unless I mess that part up (~5 jobs per minute).
Are there any issues with this? Is there a better way to do this?
Note: the "Database as an IPC anti-pattern" is only slightly apropos here because
I'm not doing IPC (there is no process generating the rows, they all already exist right now) and
the primary gripe described for that anti-pattern is that it results in unneeded load on the DB as processes wait for messages (in my case, if there are no messages, everything can shutdown as everything is done)
The best way to implement a job queue in a relational database system is to use SKIP LOCKED.
SKIP LOCKED is a lock acquisition option that applies to both read/share (FOR SHARE) or write/exclusive (FOR UPDATE) locks and is widely supported nowadays:
Oracle 10g and later
PostgreSQL 9.5 and later
SQL Server 2005 and later
MySQL 8.0 and later
Now, consider we have the following post table:
The status column is used as an Enum, having the values of:
PENDING (0),
APPROVED (1),
SPAM (2).
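The post table itself is not reproduced in the original; a schema consistent with the columns referenced by the queries (names and types inferred, not authoritative) would be:

```sql
CREATE TABLE post (
    id     BIGINT PRIMARY KEY,
    title  VARCHAR(255),
    body   TEXT,
    status INTEGER NOT NULL   -- 0 = PENDING, 1 = APPROVED, 2 = SPAM
);
```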
If we have multiple concurrent users trying to moderate the post records, we need a way to coordinate their efforts to avoid having two moderators review the same post row.
So, SKIP LOCKED is exactly what we need. If two concurrent users, Alice and Bob, execute the following SELECT queries which lock the post records exclusively while also adding the SKIP LOCKED option:
[Alice]:
SELECT
p.id AS id1_0_,
p.body AS body2_0_,
p.status AS status3_0_,
p.title AS title4_0_
FROM
post p
WHERE
p.status = 0
ORDER BY
p.id
LIMIT 2
FOR UPDATE OF p SKIP LOCKED
[Bob]:
SELECT
p.id AS id1_0_,
p.body AS body2_0_,
p.status AS status3_0_,
p.title AS title4_0_
FROM
post p
WHERE
p.status = 0
ORDER BY
p.id
LIMIT 2
FOR UPDATE OF p SKIP LOCKED
We can see that Alice can select the first two entries while Bob selects the next two records. Without SKIP LOCKED, Bob's lock acquisition request would block until Alice releases the lock on the first two records.
Here's what I've used successfully in the past:
MsgQueue table schema
MsgId identity -- NOT NULL
MsgTypeCode varchar(20) -- NOT NULL
SourceCode varchar(20) -- process inserting the message -- NULLable
State char(1) -- 'N'ew if queued, 'A'(ctive) if processing, 'C'ompleted, default 'N' -- NOT NULL
CreateTime datetime -- default GETDATE() -- NOT NULL
Msg varchar(255) -- NULLable
Your message types are what you'd expect - messages that conform to a contract between the process(es) inserting and the process(es) reading, structured with XML or your other choice of representation (JSON would be handy in some cases, for instance).
Then 0-to-n processes can be inserting, and 0-to-n processes can be reading and processing the messages. Each reading process typically handles a single message type, and multiple instances of a process type can run for load balancing.
The reader pulls one message and changes the state to "A"ctive while it works on it. When it's done it changes the state to "C"omplete. It can delete the message or not depending on whether you want to keep the audit trail. Messages of State = 'N' are pulled in MsgType/Timestamp order, so there's an index on MsgType + State + CreateTime.
Variations:
State for "E"rror.
Column for Reader process code.
Timestamps for state transitions.
This has provided a nice, scalable, visible, simple mechanism for doing a number of things like you are describing. If you have a basic understanding of databases, it's pretty foolproof and extensible.
Code from comments:
CREATE PROCEDURE GetMessage ( @MsgType VARCHAR(20) )
AS
DECLARE @MsgId INT
BEGIN TRAN
SELECT TOP 1 @MsgId = MsgId
FROM MsgQueue
WHERE MsgTypeCode = @MsgType AND State = 'N'
ORDER BY CreateTime
IF @MsgId IS NOT NULL
BEGIN
UPDATE MsgQueue
SET State = 'A'
WHERE MsgId = @MsgId
SELECT MsgId, Msg
FROM MsgQueue
WHERE MsgId = @MsgId
END
ELSE
BEGIN
SELECT MsgId = NULL, Msg = NULL
END
COMMIT TRAN
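The claim-then-read pattern in the procedure above can be sketched end to end as a self-contained, runnable example. This uses Python with SQLite purely for illustration (the original targets SQL Server; SQLite has no UPDLOCK/ROWLOCK hints, so here the single-claimer guarantee comes from SQLite's connection-level locking and the single transaction). Table and column names mirror the answer's schema:

```python
import sqlite3

# In-memory queue table modeled on the MsgQueue schema above.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE MsgQueue (
        MsgId       INTEGER PRIMARY KEY AUTOINCREMENT,
        MsgTypeCode TEXT NOT NULL,
        State       TEXT NOT NULL DEFAULT 'N',  -- N=new, A=active, C=complete
        CreateTime  TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
        Msg         TEXT
    )
""")
conn.executemany(
    "INSERT INTO MsgQueue (MsgTypeCode, Msg) VALUES (?, ?)",
    [("JOB", "first"), ("JOB", "second")],
)
conn.commit()

def get_message(msg_type):
    """Claim the oldest 'N'ew message of msg_type; return (MsgId, Msg) or None."""
    with conn:  # one transaction: find, mark active, and read atomically
        row = conn.execute(
            "SELECT MsgId FROM MsgQueue "
            "WHERE MsgTypeCode = ? AND State = 'N' "
            "ORDER BY CreateTime, MsgId LIMIT 1",
            (msg_type,),
        ).fetchone()
        if row is None:
            return None
        conn.execute("UPDATE MsgQueue SET State = 'A' WHERE MsgId = ?", (row[0],))
        return conn.execute(
            "SELECT MsgId, Msg FROM MsgQueue WHERE MsgId = ?", (row[0],)
        ).fetchone()

print(get_message("JOB"))  # → (1, 'first')
print(get_message("JOB"))  # → (2, 'second')
print(get_message("JOB"))  # → None (queue drained)
```

Each claimed row stays in the table with State = 'A', preserving the audit trail the answer describes; a completing worker would later set State = 'C' (or delete the row).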
Instead of having owner = NULL when a row isn't owned, you should set it to a fake "nobody" value instead. Searching for NULL doesn't use the index, so you might end up with a table scan. (This is Oracle behavior; SQL Server might be different.)
Just as a possible technology change, you might consider using MSMQ or something similar.
Each of your jobs/threads could query the message queue to see if a new job is available. Because the act of reading a message removes it from the queue, you are ensured that only one job/thread gets the message.
Of course, this is assuming you are working with a Microsoft platform.
See Vlad's answer for context, I'm just adding the equivalent in Oracle because there's a few "gotchas" to be aware of.
The
SELECT * FROM t order by x limit 2 FOR UPDATE OF t SKIP LOCKED
will not translate directly to Oracle in the way you might expect. If we look at a few options of translation, we might try any of the following:
SQL> create table t as
2 select rownum x
3 from dual
4 connect by level <= 100;
Table created.
SQL> declare
2 rc sys_refcursor;
3 begin
4 open rc for select * from t order by x for update skip locked fetch first 2 rows only;
5 end;
6 /
open rc for select * from t order by x for update skip locked fetch first 2 rows only;
*
ERROR at line 4:
ORA-06550: line 4, column 65:
PL/SQL: ORA-00933: SQL command not properly ended
ORA-06550: line 4, column 15:
PL/SQL: SQL Statement ignored
SQL> declare
2 rc sys_refcursor;
3 begin
4 open rc for select * from t order by x fetch first 2 rows only for update skip locked ;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
ORA-06512: at line 4
or perhaps try falling back to the ROWNUM option
SQL> declare
2 rc sys_refcursor;
3 begin
4 open rc for select * from ( select * from t order by x ) where rownum <= 10 for update skip locked;
5 end;
6 /
declare
*
ERROR at line 1:
ORA-02014: cannot select FOR UPDATE from view with DISTINCT, GROUP BY, etc.
ORA-06512: at line 4
And you won't get any joy. You thus need to control the fetching of the "n" rows yourself. Thus you can code up something like:
SQL> declare
2 rc sys_refcursor;
3 res1 sys.odcinumberlist := sys.odcinumberlist();
4 begin
5 open rc for select * from t order by x for update skip locked;
6 fetch rc bulk collect into res1 limit 10;
7 end;
8 /
PL/SQL procedure successfully completed.
You are trying to implement the "database as IPC" anti-pattern. Look it up to understand why you should consider redesigning your software properly.