PostgreSQL Locking Questions

I am trying to figure out how to lock an entire table against writes in Postgres, but it doesn't seem to be working, so I assume I am doing something wrong.
Table name is 'users' for example.
LOCK TABLE users IN EXCLUSIVE MODE;
When I check the view pg_locks it doesn't seem to be in there. I've tried other locking modes as well to no avail.
Other transactions are also able to take the same LOCK and do not block as I assumed they would.
In the psql tool (8.1) I simply get back LOCK TABLE.
Any help would be wonderful.

There is no LOCK TABLE in the SQL standard, which instead uses SET TRANSACTION to specify concurrency levels on transactions. You should be able to use LOCK in transactions like this one:
BEGIN WORK;
LOCK TABLE table_name IN ACCESS EXCLUSIVE MODE;
SELECT * FROM table_name WHERE id=10;
UPDATE table_name SET field1 = 'test' WHERE id = 10;
COMMIT WORK;
I actually tested this on my db.

Bear in mind that "lock table" only lasts until the end of a transaction. So it is ineffective unless you have already issued a "begin" in psql.
(in 9.0 this gives an error: "LOCK TABLE can only be used in transaction blocks". 8.1 is very old)
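To make the pitfall concrete, a minimal sketch (on a version as old as 8.1 the first statement silently autocommits, so the lock is released immediately; 9.0+ raises the error instead):
LOCK TABLE users IN ACCESS EXCLUSIVE MODE;  -- lock released (or error) right away

BEGIN;
LOCK TABLE users IN ACCESS EXCLUSIVE MODE;  -- held until COMMIT or ROLLBACK
-- other sessions writing to users now block
COMMIT;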

The lock is only active until the end of the current transaction and released when the transaction is committed (or rolled back).
Therefore, you have to embed the statement into a BEGIN and COMMIT/ROLLBACK block. After executing:
BEGIN;
LOCK TABLE users IN EXCLUSIVE MODE;
you could run the following query to see which locks are active on the users table at the moment:
SELECT * FROM pg_locks pl LEFT JOIN pg_stat_activity psa ON pl.pid = psa.pid WHERE relation = 'users'::regclass::oid;
The query should show the exclusive lock on the users table. After you perform a COMMIT and re-run the above query, the lock should no longer be present.
In addition, you could use a lock tracing tool like https://github.com/jnidzwetzki/pg-lock-tracer/ to get real-time insights into the locking activity of the PostgreSQL server. Using such lock tracing tools, you can see which locks are taken and released in real-time.
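On PostgreSQL 9.6 or later you can also ask the server directly which sessions are blocked and by whom, using the built-in pg_blocking_pids function (a minimal sketch):
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;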

Related

ACID transactions across multiple technologies

I have an application that uses a local database and a remote database to synchronize to. The local database uses SQLite and for the remote database I'm using postgres. I need to move data from one database to the other database and avoid duplicating information.
Roughly what I do right now:
BEGIN;                                        -- remote database (start transaction)
SELECT * FROM local.queued LIMIT 1;           -- local database (select first queued element)
INSERT INTO remote.queued VALUES ( element ); -- remote database (insert first queued element on remote database)
BEGIN;                                        -- local database (start transaction)
DELETE FROM local.queued LIMIT 1;             -- local database (delete first queued element on local database)
END;                                          -- local database (finalize local transaction)
END;                                          -- remote database (finalize remote transaction)
This works relatively well most of the time, but occasionally, after a hard reset of the program, I've noticed that a data record was duplicated. I believe this has something to do with how the transactions are finalized. Because I'm using two distinct technologies, it is impossible to create a single atomic commit with WAL archiving.
Any ideas how I could improve this design to avoid duplicate entries?
The canonical way to do that is a distributed transaction using the two-phase commit protocol.
Unfortunately SQLite doesn't seem to support it, but since PostgreSQL does, you can still use it if only two databases are involved:
BEGIN; -- on PostgreSQL
BEGIN; -- on SQLite
/*
* Do work on both databases.
* On error, ROLLBACK both transactions.
*/
PREPARE TRANSACTION 'somename'; -- PostgreSQL
COMMIT; -- SQLite
COMMIT PREPARED 'somename'; -- PostgreSQL
Now if an error happens during the SQLite COMMIT, you run ROLLBACK PREPARED 'somename' on PostgreSQL. The idea is that everything that can fail during commit is done during PREPARE TRANSACTION, and the state of the transaction is persisted so that it stays open, but will still survive a server restart.
This is safe, but there is a caveat. Prepared transactions are dangerous, because they will hold locks and keep VACUUM from cleaning up (like all other transactions), but they are persistent and stick around until you explicitly remove them. So you need some piece of software, a distributed transaction manager, that is crash safe and keeps track of all distributed transactions. This transaction manager can clean up all prepared transactions after some outage.
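As a minimal sketch of what such cleanup can look like on the PostgreSQL side (the five-minute threshold is an arbitrary example), the transaction manager can list prepared transactions that were left hanging via the pg_prepared_xacts view and then resolve each one explicitly:
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts
WHERE prepared < now() - interval '5 minutes';

-- then, per transaction, depending on whether the SQLite side committed:
-- COMMIT PREPARED 'somename';
-- ROLLBACK PREPARED 'somename';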
I think it would make sense to make your DML actions idempotent - that is to say that if you call them multiple times they have the same overall effect. For example, we can make the INSERT a no-op if the data exists:
INSERT INTO x (id, name)
SELECT nu.id, nu.name
FROM (SELECT 1 AS id, 'a' AS name) AS nu
LEFT JOIN x ON nu.id = x.id
WHERE x.id IS NULL;
You can run this as many times as you like, and it'll only insert one record.
https://www.db-fiddle.com/f/nbHmy3PVDQ3RrGMqLni1su/0
You'll need to decide what to do if the record exists in an altered state - e.g. do you want to leave it alone, or reset it to the incoming values? But that's a question for another time.
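On PostgreSQL 9.5 or later the same idempotent insert can be written more directly with ON CONFLICT (this assumes a unique constraint or primary key on x.id):
INSERT INTO x (id, name)
VALUES (1, 'a')
ON CONFLICT (id) DO NOTHING;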

SQL database locking difference between SELECT and UPDATE

I am re-writing an old stored procedure which is called by BizTalk. Now this has the potential to have 50-60 messages pushed through at once.
I occasionally have an issue with database locking when they are all trying to update at once.
I can only make changes in SQL (not BizTalk) and I am trying to find the best way to run my SP.
With this in mind, what I have done is to use a SELECT statement for the majority of the procedure, to determine whether an UPDATE is needed at all.
My question is: what is the difference, regarding locking, between an UPDATE statement and a SELECT with NOLOCK against it?
I hope this makes sense - thank you.
You use NOLOCK when you want to read uncommitted data and want to avoid taking any shared lock on the data, so that other transactions can take an exclusive lock for updating/deleting.
You should not use NOLOCK with an UPDATE statement; it is a really bad idea. Microsoft says support for these hints on the target of an UPDATE/DELETE statement will be removed:
Support for use of the READUNCOMMITTED and NOLOCK hints in the FROM clause that apply to the target table of an UPDATE or DELETE statement will be removed in a future version of SQL Server. Avoid using these hints in this context in new development work, and plan to modify applications that currently use them.
Source
Regarding your locking problem when multiple updates happen at the same time: this normally occurs when you read data with the intention of updating it later while taking only a shared lock. The subsequent UPDATE statement can't acquire the necessary update locks, because they are blocked by the shared locks acquired in the other session, causing the deadlock.
To resolve this, you can select the records using UPDLOCK, like the following:
DECLARE @IdToUpdate INT

SELECT @IdToUpdate = ID FROM [Your_Table] WITH (UPDLOCK) WHERE A=B

UPDATE [Your_Table]
SET X=Y
WHERE ID=@IdToUpdate
This takes the necessary update lock on the record in advance. Other sessions can still read the row, but they cannot acquire another update or exclusive lock on it, which prevents this kind of deadlock.
NOLOCK: Specifies that dirty reads are allowed. No shared locks are issued to prevent other transactions from modifying data read by the current transaction, and exclusive locks set by other transactions do not block the current transaction from reading the locked data. NOLOCK is equivalent to READUNCOMMITTED.
Thus, with NOLOCK you get all rows back, but you may read uncommitted (dirty) data. With READPAST you get only committed data, so you may miss records that are currently being modified and not yet committed.
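A minimal two-session sketch of the difference (the table t and its contents are hypothetical):
-- Session 1 holds an uncommitted change:
BEGIN TRAN;
UPDATE t SET val = 'dirty' WHERE id = 1;   -- not committed yet

-- Session 2:
SELECT * FROM t WITH (NOLOCK);   -- returns the row, including the uncommitted 'dirty' value
SELECT * FROM t WITH (READPAST); -- skips the locked row and returns everything else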
For a better understanding, see the links below:
https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/
https://www.mssqltips.com/sqlservertip/4468/compare-sql-server-nolock-and-readpast-table-hints/
https://social.technet.microsoft.com/wiki/contents/articles/19112.understanding-nolock-query-hint.aspx

Design a Lock for SQL Server to help relax the conflict between INSERT and SELECT

The SQL Server is SQL Azure; basically it's SQL Server 2008 for normal purposes.
I have a table, called TASK, that constantly has new data coming in (new tasks) and being removed (completed tasks).
For new data in, I use INSERT INTO .. SELECT ..., which most of the time takes very long, let's say dozens of minutes.
For old data out, I first use SELECT (WITH NOLOCK) to get a task, UPDATE to let other threads know this task has started processing, then DELETE once finished.
Deadlocks sometimes happen on the SELECT, but most of the time on the UPDATE and DELETE.
This is not a time-critical task, so I can start processing the new data once all the INSERTs have finished. Is there any kind of LOCK to ask the SELECT not to pick rows up before the INSERT has finished? Or any other suggestion to avoid the conflict? I can redesign the table if needed.
Since SQL Server 2005, resolving locking conflicts is easier.
For the conflict:
1. You can use Service Broker.
2. You can use the isolation level.
Run DBCC USEROPTIONS; in the last row you can see that the default isolation level is READ COMMITTED, at the session level.
We can change the level to READ_COMMITTED_SNAPSHOT to avoid the conflict. SQL Server does not have true non-blocking row-level behavior like Oracle, but we can use this method to get a similar effect:
ALTER DATABASE DBName
SET READ_COMMITTED_SNAPSHOT ON;
To enable this feature there must be no other open connections in the database.
Then you can test it with two sessions, A and B:
A: UPDATE table1 WITH (XLOCK) SET name = 'new' WHERE id = 1
B: can still update other rows and select all the data from the table.
My English is not very good, but locks I know. In SQL Server there are three locking approaches:
1. Optimistic locking, controlled with a timestamp (rowversion) column.
2. Pessimistic locking, forcing a lock when using the data, with UPDLOCK, XLOCK and so on.
3. Virtual (application) locks, using the sp_getapplock procedure.
If you need a locking scheme for your system architecture, please email me: mjjjj2001#163.com
Consider using service broker if this is a processing queue.
There are a number of considerations that affect performance and locking. I surmise that the data is being updated and deleted in a separate session. Which transaction isolation level is in use for the insert session and the delete session?
Have the insert session and all its transactions committed and closed when the delete session runs? Are there multiple delete sessions running concurrently? It is very important to have an index on the columns you use to identify a task in the SELECT/UPDATE/DELETE statements, especially if you move to a higher isolation level such as REPEATABLE READ or SERIALIZABLE.
All of these issues could be solved by moving to Service Broker if it is appropriate.

SELECT FOR UPDATE with SQL Server

I'm using a Microsoft SQL Server 2005 database with isolation level READ_COMMITTED and READ_COMMITTED_SNAPSHOT=ON.
Now I want to use:
SELECT * FROM <tablename> FOR UPDATE
...so that other database connections block when trying to access the same row "FOR UPDATE".
I tried:
SELECT * FROM <tablename> WITH (updlock) WHERE id=1
...but this blocks all other connections even for selecting an id other than "1".
Which is the correct hint to do a SELECT FOR UPDATE as known for Oracle, DB2, MySql?
EDIT 2009-10-03:
These are the statements to create the table and the index:
CREATE TABLE example ( Id BIGINT NOT NULL, TransactionId BIGINT,
Terminal BIGINT, Status SMALLINT );
ALTER TABLE example ADD CONSTRAINT index108 PRIMARY KEY ( Id )
CREATE INDEX I108_FkTerminal ON example ( Terminal )
CREATE INDEX I108_Key ON example ( TransactionId )
A lot of parallel processes do this SELECT:
SELECT * FROM example o WITH (updlock) WHERE o.TransactionId = ?
EDIT 2009-10-05:
For a better overview I've written down all tried solutions in the following table:
mechanism | SELECT on different row blocks | SELECT on same row blocks
-----------------------+--------------------------------+--------------------------
ROWLOCK | no | no
updlock, rowlock | yes | yes
xlock,rowlock | yes | yes
repeatableread | no | no
DBCC TRACEON (1211,-1) | yes | yes
rowlock,xlock,holdlock | yes | yes
updlock,holdlock | yes | yes
UPDLOCK,READPAST | no | no
I'm looking for | no | yes
Recently I had a deadlock problem because SQL Server locks more than necessary (at page level). You can't really do anything about it. Now we are catching deadlock exceptions... and I wish I had Oracle instead.
Edit:
In the meantime we are using snapshot isolation, which solves many, but not all, of the problems. Unfortunately, to be able to use snapshot isolation it must be enabled on the database server, which may cause unnecessary problems at the customer's site. Now we are not only catching deadlock exceptions (which still can occur, of course) but also snapshot concurrency problems, in order to repeat transactions from background processes (which cannot be repeated by the user). But this still performs much better than before.
I have a similar problem: I want to lock only one row.
As far as I know, with the UPDLOCK option, SQL Server locks all the rows that it needs to read in order to find the target row. So, if you don't define an index that gives direct access to the row, all the rows read along the way will be locked.
In your example:
Assume that you have a table named TBL with an id field.
You want to lock the row with id=10.
You need to define an index for the field id (or any other fields that are involved in your SELECT):
CREATE INDEX TBLINDEX ON TBL ( id )
And then, your query to lock ONLY the rows that you read is:
SELECT * FROM TBL WITH (UPDLOCK, INDEX(TBLINDEX)) WHERE id=10
If you don't use the INDEX(TBLINDEX) option, SQL Server needs to read all rows from the beginning of the table to find your row with id=10, so those rows will be locked.
You cannot have snapshot isolation and blocking reads at the same time. The purpose of snapshot isolation is to prevent blocking reads.
Perhaps making MVCC permanent could solve it (as opposed to a specific batch only: SET TRANSACTION ISOLATION LEVEL SNAPSHOT):
ALTER DATABASE yourDbNameHere SET READ_COMMITTED_SNAPSHOT ON;
[EDIT: October 14]
After reading this: Better concurrency in Oracle than SQL Server? and this: http://msdn.microsoft.com/en-us/library/ms175095.aspx
When the READ_COMMITTED_SNAPSHOT database option is set ON, the mechanisms used to support the option are activated immediately. When setting the READ_COMMITTED_SNAPSHOT option, only the connection executing the ALTER DATABASE command is allowed in the database. There must be no other open connection in the database until ALTER DATABASE is complete. The database does not have to be in single-user mode.
I've come to the conclusion that you need to set two flags in order to activate MSSQL's MVCC permanently on a given database:
ALTER DATABASE yourDbNameHere SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE yourDbNameHere SET READ_COMMITTED_SNAPSHOT ON;
Try WITH (UPDLOCK, ROWLOCK).
The full answer could delve into the internals of the DBMS. It depends on how the query engine (which executes the query plan generated by the SQL optimizer) operates.
However, one possible explanation (applicable to at least some versions of some DBMS - not necessarily to MS SQL Server) is that there is no index on the ID column, so any process trying to work a query with 'WHERE id = ?' in it ends up doing a sequential scan of the table, and that sequential scan hits the lock which your process applied. You can also run into problems if the DBMS applies page-level locking by default; locking one row locks the entire page and all the rows on that page.
There are some ways you could confirm or rule this out as the source of trouble. Look at the query plan; study the indexes; try your SELECT with an ID of 1000000 instead of 1 and see whether other processes are still blocked.
OK, a single SELECT will by default use "Read Committed" transaction isolation, which locks and therefore stops writes to the set it reads. You can change the transaction isolation level with:
Set Transaction Isolation Level { Read Uncommitted | Read Committed | Repeatable Read | Serializable }
Begin Tran
Select ...
Commit Tran
These are explained in detail in SQL Server BOL
Your next problem is that by default SQL Server 2K5 will escalate locks if you have more than ~2500 locks or use more than 40% of 'normal' memory in the lock transaction. The escalation goes to a page lock, then a table lock.
You can switch this escalation off by setting trace flag 1211; see BOL for more information.
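For instance, this is one way to disable lock escalation instance-wide (use with care, as it affects every database on the instance and can drive up lock memory):
DBCC TRACEON (1211, -1)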
Create a fake update to enforce the rowlock.
UPDATE <tablename> WITH (ROWLOCK) SET <somecolumn> = <somecolumn> WHERE id=1
If that's not locking your row, god knows what will.
After this "UPDATE" you can do your SELECT (ROWLOCK) and subsequent updates.
I'm assuming you don't want any other session to be able to read the row while this specific query is running...
Wrapping your SELECT in a transaction while using WITH (XLOCK,READPAST) locking hint will get the results you want. Just make sure those other concurrent reads are NOT using WITH (NOLOCK). READPAST allows other sessions to perform the same SELECT but on other rows.
BEGIN TRAN
SELECT *
FROM <tablename> WITH (XLOCK, READPAST)
WHERE RowId = @SomeId
-- Do SOMETHING
UPDATE <tablename>
SET <column> = @somevalue
WHERE RowId = @SomeId
COMMIT
Question - is this case proven to be the result of lock escalation (i.e. if you trace with profiler for lock escalation events, is that definitely what is happening to cause the blocking)? If so, there is a full explanation and a (rather extreme) workaround by enabling a trace flag at the instance level to prevent lock escalation. See http://support.microsoft.com/kb/323630 trace flag 1211
But, that will likely have unintended side effects.
If you are deliberately locking a row and keeping it locked for an extended period, then using the internal locking mechanism for transactions isn't the best method (in SQL Server at least). All the optimization in SQL Server is geared toward short transactions - get in, make an update, get out. That's the reason for lock escalation in the first place.
So if the intent is to "check out" a row for a prolonged period, instead of transactional locking it's best to use a column with values and a plain ol' update statement to flag the rows as locked or not.
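A minimal sketch of that flag-column pattern (the table, the LockedBy/LockedAt columns and the variables are all hypothetical):
UPDATE mytable
SET LockedBy = @user, LockedAt = GETDATE()
WHERE Id = @id AND LockedBy IS NULL

IF @@ROWCOUNT = 1
    PRINT 'row checked out'   -- otherwise another session already holds it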
Application locks are one way to roll your own locking with custom granularity while avoiding "helpful" lock escalation. See sp_getapplock.
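A minimal application-lock sketch (the resource name 'users-row-10' is arbitrary; with @LockOwner = 'Transaction' the lock is released automatically at COMMIT):
BEGIN TRAN

DECLARE @res INT
EXEC @res = sp_getapplock @Resource = 'users-row-10',
                          @LockMode = 'Exclusive',
                          @LockOwner = 'Transaction',
                          @LockTimeout = 5000   -- wait up to 5 seconds

IF @res >= 0
BEGIN
    -- protected work goes here
    PRINT 'lock acquired'
END

COMMIT TRAN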
Try using:
SELECT * FROM <tablename> WITH (ROWLOCK, XLOCK, HOLDLOCK)
This should make the lock exclusive and hold it for the duration of the transaction.
According to this article, the solution is to use the WITH(REPEATABLEREAD) hint.
Revisit all your queries; maybe you have some query that selects from the same table without a ROWLOCK/UPDLOCK hint while you SELECT FOR UPDATE.
MSSQL often escalates those row locks to page-level locks (even table-level locks if you don't have an index on the field you are querying); see this explanation. Since you ask for FOR UPDATE, I assume you need transaction-level (e.g. financial, inventory, etc.) robustness, so the advice on that site is not applicable to your problem; it's just an insight into why MSSQL escalates locks.
If you are already using MSSQL 2005 (and up), it is MVCC-based, and I think you should have no problem with row-level locks using the ROWLOCK/UPDLOCK hint. But do check whether the queries that touch the same table you want to SELECT FOR UPDATE escalate locks, by checking whether the fields in their WHERE clauses have indexes.
P.S.
I'm using PostgreSQL, which also uses MVCC and has FOR UPDATE, and I don't encounter the same problem. Lock escalation is what MVCC avoids, so I would be surprised if MSSQL 2005 still escalated locks on tables queried through WHERE clauses on unindexed fields. If lock escalation is still the case for MSSQL 2005, check whether the fields in those WHERE clauses have indexes.
Disclaimer: my last use of MSSQL was version 2000.
You have to deal with the exception at commit time and repeat the transaction.
I solved the rowlock problem in a completely different way. I realized that SQL Server was not able to manage such a lock in a satisfying way. I chose to solve this from a programmatic point of view by using a mutex... waitForLock... releaseLock...
Have you tried READPAST?
I've used UPDLOCK and READPAST together when treating a table like a queue.
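A sketch of that queue pattern as a single atomic dequeue (the Status column and its values are hypothetical; READPAST lets concurrent workers skip rows another worker has already locked):
UPDATE TOP (1) <tablename> WITH (UPDLOCK, READPAST)
SET Status = 'processing'
OUTPUT inserted.ID
WHERE Status = 'queued'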
How about doing a simple update on this row first (without really changing any data)? After that you can proceed with the row as if it was selected for update.
UPDATE dbo.Customer SET FieldForLock = FieldForLock WHERE CustomerID = @CustomerID
/* do whatever you want */
Edit: you should wrap it in a transaction of course
Edit 2: another solution is to use SERIALIZABLE isolation level

Sybase ASE: "Your server command encountered a deadlock situation"

When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) randomly get this error:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using:
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and recreate the indexes, views, stored procedures and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
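For example, the locking scheme can be switched per table with ALTER TABLE (the table name is hypothetical):
ALTER TABLE mytable LOCK DATAROWS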
I have a set of long-running apps which occasionally overlap in their table access, and Sybase will throw this error. If you check the Sybase server log, it will give you complete information on why it happened, such as the SQL involved in the two processes trying to get a lock - usually one trying to read and the other doing something like a delete. In my case the apps are running in separate JVMs, so they can't synchronize; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan) you could try breaking the component parts of the SP down and wrapping them in separate transactions so that each unit of work is completed before the next one starts.
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go