UPDATE does not lock table if no rows match, race condition - sql

I want to increment and return a counter from a database table.
The java code is as follows:
String sqlUpdate = "UPDATE mytable SET col3 = col3 + 1 WHERE colpk1 = ? AND colpk2 = ?";
Query queryUpdate = manager.createNativeQuery(sqlUpdate);
queryUpdate.setParameter(1, ...);
queryUpdate.setParameter(2, ...);
int num = queryUpdate.executeUpdate();
if (num == 0) {
    long count = 1;
    String sqlInsert = "INSERT INTO mytable (colpk1, colpk2, col3) VALUES (?,?,?)";
    Query queryInsert = manager.createNativeQuery(sqlInsert);
    queryInsert.setParameter(1, ...);
    queryInsert.setParameter(2, ...);
    queryInsert.setParameter(3, count);
    queryInsert.executeUpdate();
    return count;
} else {
    String sqlSelect = "SELECT col3 FROM mytable WHERE colpk1 = ? AND colpk2 = ?";
    Query querySelect = manager.createNativeQuery(sqlSelect);
    querySelect.setParameter(1, ...);
    querySelect.setParameter(2, ...);
    Object result = querySelect.getSingleResult();
    return Long.parseLong(result.toString());
}
This works well even when used concurrently (the UPDATE takes a row lock) as long as a row with the given primary key already exists. However, when that row does not exist yet (num == 0), the UPDATE locks nothing, so a concurrent transaction can slip in between the two statements, leading to a unique constraint violation on the INSERT because the new row was already created in the meantime.
What's the best way to solve this problem? Would it be better to run a SELECT FOR UPDATE first and then, depending on the result, do an UPDATE or an INSERT?

The MERGE statement will avoid the split statements.
http://en.wikipedia.org/wiki/Merge_(SQL)
Alternatively, you could always trap the Unique constraint exception for the rare cases when the condition occurs, and retry.
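For illustration, a minimal sketch of that idea using the table and columns from the question. The question does not name the DBMS, so this is ANSI/Oracle-style MERGE and the exact syntax will vary by vendor:
MERGE INTO mytable t
USING (SELECT ? AS colpk1, ? AS colpk2 FROM dual) s   -- FROM dual is Oracle-specific
ON (t.colpk1 = s.colpk1 AND t.colpk2 = s.colpk2)
WHEN MATCHED THEN
    UPDATE SET t.col3 = t.col3 + 1
WHEN NOT MATCHED THEN
    INSERT (colpk1, colpk2, col3) VALUES (s.colpk1, s.colpk2, 1);
Note that this only performs the increment-or-insert; returning the new counter value still needs the follow-up SELECT (or a RETURNING clause where the database supports one), and MERGE itself can still raise a unique constraint violation under concurrency, as noted below.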

As MERGE can also throw a unique constraint violation under concurrent execution, the solution that worked best was to catch the exception on the INSERT: if it fires, the row must already exist, so continue with the UPDATE.
Getting that transaction to commit under container-managed transactions was the next problem, because the exception left the transaction with isRollbackOnly == true. What worked was moving the INSERT attempt into a separate bean call running in a new transaction; see Commit transaction after exception - undo setRollbackOnly

Related

Atomic optimistic locking on a PostgreSQL table doesn't fail if clause not satisfied

Let's say I have the table definition below:
CREATE TABLE dummy_table (
    id SERIAL PRIMARY KEY,
    version INT NOT NULL,
    data TEXT NOT NULL);
INSERT INTO dummy_table(version, data)
VALUES (1, 'Stuff');
UPDATE dummy_table
SET version = 2
WHERE id = 1
AND version = 1;
Which basically gives a table in the state below:
 id | version | data
----+---------+---------
  1 |       2 | 'Stuff'
Now if several optimistic-locking update statements are received by the database engine, how do I make sure that they fail if the version is not the current one? For instance:
UPDATE dummy_table
SET version = 1
WHERE id = 1
AND version = 1;
If the conditions in the WHERE clause cannot be satisfied, the update simply does not happen.
The problem is that there is no error given as feedback when executing that statement.
I tried the solutions available here, but I'm not sure they are actually atomic:
UPDATE dummy_table
SET version = 1
WHERE id = 1
AND version = 1
RETURNING id;
Does not return anything and does not throw any exception if the clause is not satisfied.
DO $$
BEGIN
    UPDATE dummy_table
    SET version = 1
    WHERE id = 1;
    IF NOT FOUND THEN
        RAISE EXCEPTION 'Record not found.';
    END IF;
END $$;
Works but not sure it's actually atomic.
Is there any solution that would make an actual (atomic) optimistic-locking update fail if the condition in the UPDATE statement cannot be satisfied?
Both solutions are fine and not subject to a race condition.
If the UPDATE changes no rows, RETURNING will return an empty result set, and FOUND will be set to FALSE.
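For reference, a minimal sketch (untested) that combines the two approaches from the question: the optimistic version check stays in the WHERE clause and FOUND drives the error. The literal values are illustrative:
DO $$
BEGIN
    UPDATE dummy_table
    SET version = version + 1
    WHERE id = 1
      AND version = 2;   -- the version the client last read
    IF NOT FOUND THEN
        RAISE EXCEPTION 'Optimistic lock failed: row missing or version changed.';
    END IF;
END $$;
The UPDATE itself is atomic, so the only job of the IF NOT FOUND branch is to turn "zero rows updated" into an error the caller can react to.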

Use trigger after multiple insert to update log table

I have the following trigger:
ALTER TRIGGER [Staging].[tr_UriData_ForInsert]
ON [Staging].[UriData]
FOR INSERT
AS
BEGIN
    DECLARE @_Serial NVARCHAR(50)
    DECLARE @_Count AS INT
    IF @@ROWCOUNT = 0
        RETURN
    SET NOCOUNT ON;
    IF EXISTS(SELECT * FROM inserted)
    BEGIN
        SELECT @_Count = COUNT(Id) FROM inserted
        SELECT @_Serial = SerialNumber FROM inserted
        INSERT INTO [Staging].[DataLog]
        VALUES (CURRENT_TIMESTAMP, @_Serial + ': Data Insert --> Rows inserted: ' + @_Count, 'New data has been received')
    END
END
The table receives multiple rows at once. I want to be able to add one row in the log table to tell me the insert has happened.
It works great with one row being inserted, but with multiple rows, the trigger doesn't fire. I have read other items on here and it is quite clear that you shouldn't use ROW_NUMBER().
In summary: I want to update my log table when a multiple row insert happens in another table called UriData.
The data is inserted from C# using the following:
using (var sqlBulk = new SqlBulkCopy(conn, SqlBulkCopyOptions.Default, transaction))
{
    sqlBulk.DestinationTableName = tableName;
    try
    {
        sqlBulk.WriteToServer(dt);
    }
    catch (SqlException sqlEx)
    {
        transaction.Rollback();
        var msg = sqlEx.Message;
        return false;
    }
    finally
    {
        transaction.Commit();
        conn.Close();
    }
}
I don't want to know what is being inserted, but when it has happened, so I can run a set of SPROCS to clean and pivot the data.
TIA
The problem is that your trigger assumes only one row will be inserted. A scalar variable can only hold one value, so the statement SELECT @_Serial = SerialNumber FROM inserted will set @_Serial to the last value returned from the inserted pseudo-table.
Treat your data as what it is, a dataset. This is untested, however, I suspect this gives you the result you want:
ALTER TRIGGER [Staging].[tr_UriData_ForInsert]
ON [Staging].[UriData]
FOR INSERT
AS
BEGIN
    --No need for a ROWCOUNT check. If there are no rows, then nothing was inserted and this trigger won't fire.
    INSERT INTO [Staging].[DataLog] ({COLUMNS LIST})
    SELECT CURRENT_TIMESTAMP,
           SerialNumber + ': Data Insert --> Rows inserted: ' +
           CONVERT(varchar(10), COUNT(SerialNumber) OVER (PARTITION BY SerialNumber)), --COUNT returns an INT, so the original statement would have failed with a conversion error too
           'New data has been received'
    FROM inserted;
END
Please note my comments or sections in braces ({}).
Edit: Sean, who has since deleted his answer, used GROUP BY. I copied the exact method you had; however, GROUP BY may well be the clause you want rather than OVER.
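For illustration, a hedged sketch (untested) of that GROUP BY variant, with the same {COLUMNS LIST} placeholder; it writes one log row per distinct SerialNumber in the inserted batch:
ALTER TRIGGER [Staging].[tr_UriData_ForInsert]
ON [Staging].[UriData]
FOR INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO [Staging].[DataLog] ({COLUMNS LIST})
    SELECT CURRENT_TIMESTAMP,
           SerialNumber + ': Data Insert --> Rows inserted: ' + CONVERT(varchar(10), COUNT(*)),
           'New data has been received'
    FROM inserted
    GROUP BY SerialNumber;
END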
So after a lot of digging and arguing, my hosting company told me that they have disabled bulk inserts of any kind, without bothering to notify their customers.

Rebuilt table and added Identity in an upgrade tool, but the script isn't repeatable

I've been tasked with fixing a SQL script that takes a nullable, un-indexed ID column and makes sure every row has an ID. It then creates a duplicate table with the ID column as an identity and primary key, and uses a SWITCH TO command to move the data over before dropping the old table and renaming the new one. At the moment the NULLs are replaced using the WHILE loop below, but when the table has already been upgraded it throws the following error:
Cannot update identity column 'myID'
My assumption is that it's not even trying to go into the loop, but SQL Server's recognised that there's an update on what's now an identity field and has thrown a hissy-fit. This is part of a batch of upgrade scripts, so will be run regularly, but obviously we want to avoid this error being thrown.
Once the column becomes an identity column we won't need to change the values. Can the error be suppressed, or are there other solutions I should consider?
WHILE EXISTS(SELECT * FROM myTable WHERE myID IS NULL)
BEGIN
    UPDATE myTable
    SET myID = (SELECT MAX(myID) + 1 FROM myTable)
    FROM (SELECT TOP 1 * FROM myTable WHERE myID IS NULL) AS n
    WHERE n.myVarChar = myTable.myVarChar -- This is unique, but we don't use text fields as IDs
END
GO
Thanks!
You should add a check:
IF columnproperty(object_id('mytable'),'myId','IsIdentity') = 0
BEGIN
    WHILE EXISTS(SELECT * FROM myTable WHERE myID IS NULL)
    BEGIN
        UPDATE myTable
        SET myID = (SELECT MAX(myID) + 1 FROM myTable)
        FROM (SELECT TOP 1 * FROM myTable WHERE myID IS NULL) AS n
        WHERE n.myVarChar = myTable.myVarChar;
    END
END
If the myID column already has the identity property, the loop is simply skipped.
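As an aside, and purely as an untested sketch (it assumes myVarChar is unique, as the question states, and picks an arbitrary ordering for the new IDs), the row-at-a-time loop could also be replaced with a single set-based update behind the same guard:
IF columnproperty(object_id('mytable'),'myId','IsIdentity') = 0
BEGIN
    DECLARE @maxID INT = (SELECT ISNULL(MAX(myID), 0) FROM myTable);
    WITH n AS (
        SELECT myID,
               ROW_NUMBER() OVER (ORDER BY myVarChar) AS rn
        FROM myTable
        WHERE myID IS NULL
    )
    UPDATE n
    SET myID = @maxID + rn;
END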

select and delete record in transaction sql

I am using Oracle as my database server. I have a simple table which stores codes for each member. I want to remove a code from the table and also get its value.
SQL> describe member_code_store;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 MEMBER                                    NOT NULL NUMBER(38)
 CODE                                      NOT NULL VARCHAR2(30)
So I want to run below queries in a transaction
PreparedStatement pstmt = null;
ResultSet rs = null;
String query =
    "SELECT code FROM member_code_store where coupon=? AND rownum=1";
Connection connection = DBConnection.getConnection();
pstmt = connection.prepareStatement(query);
pstmt.setString(1, String.valueOf(3));
rs = pstmt.executeQuery();
rs.next();
String code = rs.getString(1);
// delete the code now
String deleteQuery =
    "delete from member_code_store where coupon =? AND code=?";
pstmt = connection.prepareStatement(deleteQuery);
pstmt.setString(1, String.valueOf(3));
pstmt.setString(2, code);
int deleted = pstmt.executeUpdate();
The problem with the above code is that multiple workers removing codes can get the same code. How do I enclose this in a transaction so that I lock just that record instead of locking the whole table?
Or should I use procedures or packages, which would be more efficient?
Essentially you should use a row lock. The example below includes the NOWAIT option, which will return an error if you try to select a row that is already locked, and your code will have to handle that.
select code, rowid
from member_code_store
where coupon=? AND rownum=1
for update of code nowait
Save the rowid so that you have a variable to supply to the delete statement
delete from member_code_store
where rowid = :row_id
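As an alternative sketch (untested PL/SQL; v_code and the :coupon bind are illustrative names), Oracle can also do the read and the delete in one statement with DELETE ... RETURNING, so there is no window between the SELECT and the DELETE at all:
DECLARE
    v_code member_code_store.code%TYPE;
BEGIN
    DELETE FROM member_code_store
    WHERE coupon = :coupon
      AND rownum = 1
    RETURNING code INTO v_code;
    -- v_code now holds the removed code; COMMIT releases the row lock
    COMMIT;
END;
/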

Update if a key, or combination of keys, exists, otherwise INSERT [duplicate]

Assume a table structure of MyTable(KEY, datafield1, datafield2...).
Often I want to either update an existing record, or insert a new record if it doesn't exist.
Essentially:
IF (key exists)
run update command
ELSE
run insert command
What's the best performing way to write this?
Don't forget about transactions. Performance is good, but the simple (IF EXISTS ...) approach is very dangerous.
When multiple threads try to perform insert-or-update you can easily get a primary key violation.
The solutions provided by @Beau Crawford & @Esteban show the general idea but are error-prone.
To avoid deadlocks and PK violations you can use something like this:
begin tran
if exists (select * from table with (updlock,serializable) where key = @key)
begin
    update table set ...
    where key = @key
end
else
begin
    insert into table (key, ...)
    values (@key, ...)
end
commit tran
or
begin tran
update table with (serializable) set ...
where key = @key
if @@rowcount = 0
begin
    insert into table (key, ...) values (@key, ...)
end
commit tran
See my detailed answer to a very similar previous question
@Beau Crawford's is a good way in SQL 2005 and below, though if you're granting rep it should go to the first guy to SO it. The only problem is that for inserts it's still two IO operations.
MS SQL Server 2008 introduces MERGE from the SQL:2003 standard:
merge tablename with (HOLDLOCK) as target
using (values ('new value', 'different value'))
    as source (field1, field2)
    on target.idfield = 7
when matched then
    update
    set field1 = source.field1,
        field2 = source.field2,
        ...
when not matched then
    insert (idfield, field1, field2, ...)
    values (7, source.field1, source.field2, ...)
Now it's really just one IO operation, but awful code :-(
Do an UPSERT:
UPDATE MyTable SET FieldA = @FieldA WHERE Key = @Key
IF @@ROWCOUNT = 0
    INSERT INTO MyTable (FieldA) VALUES (@FieldA)
http://en.wikipedia.org/wiki/Upsert
Many people will suggest you use MERGE, but I caution you against it. By default, it doesn't protect you from concurrency and race conditions any more than multiple statements, and it introduces other dangers:
Use Caution with SQL Server's MERGE Statement
So, you want to use MERGE, eh?
Even with this "simpler" syntax available, I still prefer this approach (error handling omitted for brevity):
BEGIN TRANSACTION;
UPDATE dbo.table WITH (UPDLOCK, SERIALIZABLE)
    SET ... WHERE PK = @PK;
IF @@ROWCOUNT = 0
BEGIN
    INSERT dbo.table(PK, ...) SELECT @PK, ...;
END
COMMIT TRANSACTION;
Please stop using this UPSERT anti-pattern
A lot of folks will suggest this way:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
IF EXISTS (SELECT 1 FROM dbo.table WHERE PK = @PK)
BEGIN
    UPDATE ...
END
ELSE
BEGIN
    INSERT ...
END
COMMIT TRANSACTION;
But all this accomplishes is ensuring you may need to read the table twice to locate the row(s) to be updated. In the first sample, you will only ever need to locate the row(s) once. (In both cases, if no rows are found from the initial read, an insert occurs.)
Others will suggest this way:
BEGIN TRY
    INSERT ...
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 2627
        UPDATE ...
END CATCH
However, this is problematic if for no other reason than letting SQL Server catch exceptions that you could have prevented in the first place is much more expensive, except in the rare scenario where almost every insert fails. I prove as much here:
Checking for potential constraint violations before entering TRY/CATCH
Performance impact of different error handling techniques
IF EXISTS (SELECT * FROM [Table] WHERE ID = rowID)
UPDATE [Table] SET propertyOne = propOne, property2 . . .
ELSE
INSERT INTO [Table] (propOne, propTwo . . .)
Edit:
Alas, even to my own detriment, I must admit the solutions that do this without a select seem to be better since they accomplish the task with one less step.
If you want to UPSERT more than one record at a time you can use the ANSI SQL:2003 DML statement MERGE.
MERGE INTO table_name WITH (HOLDLOCK) USING table_name ON (condition)
WHEN MATCHED THEN UPDATE SET column1 = value1 [, column2 = value2 ...]
WHEN NOT MATCHED THEN INSERT (column1 [, column2 ...]) VALUES (value1 [, value2 ...])
Check out Mimicking MERGE Statement in SQL Server 2005.
Although it's pretty late to comment on this, I want to add a more complete example using MERGE.
Such Insert+Update statements are usually called "Upsert" statements and can be implemented using MERGE in SQL Server.
A very good example is given here:
http://weblogs.sqlteam.com/dang/archive/2009/01/31/UPSERT-Race-Condition-With-MERGE.aspx
The above explains locking and concurrency scenarios as well.
I will be quoting the same for reference:
ALTER PROCEDURE dbo.Merge_Foo2
    @ID int
AS
SET NOCOUNT, XACT_ABORT ON;

MERGE dbo.Foo2 WITH (HOLDLOCK) AS f
USING (SELECT @ID AS ID) AS new_foo
    ON f.ID = new_foo.ID
WHEN MATCHED THEN
    UPDATE
    SET f.UpdateSpid = @@SPID,
        UpdateTime = SYSDATETIME()
WHEN NOT MATCHED THEN
    INSERT
    (
        ID,
        InsertSpid,
        InsertTime
    )
    VALUES
    (
        new_foo.ID,
        @@SPID,
        SYSDATETIME()
    );

RETURN @@ERROR;
/*
CREATE TABLE ApplicationsDesSocietes (
    id INT IDENTITY(0,1) NOT NULL,
    applicationId INT NOT NULL,
    societeId INT NOT NULL,
    suppression BIT NULL,
    CONSTRAINT PK_APPLICATIONSDESSOCIETES PRIMARY KEY (id)
)
GO
--*/
DECLARE @applicationId INT = 81, @societeId INT = 43, @suppression BIT = 0

MERGE dbo.ApplicationsDesSocietes WITH (HOLDLOCK) AS target
--set the SOURCE table one row
USING (VALUES (@applicationId, @societeId, @suppression))
    AS source (applicationId, societeId, suppression)
--here goes the ON join condition
ON target.applicationId = source.applicationId AND target.societeId = source.societeId
WHEN MATCHED THEN
    UPDATE
    --place your list of SET here
    SET target.suppression = source.suppression
WHEN NOT MATCHED THEN
    --insert a new line with the SOURCE table one row
    INSERT (applicationId, societeId, suppression)
    VALUES (source.applicationId, source.societeId, source.suppression);
GO
Replace table and field names by whatever you need.
Take care with the USING ... ON join condition.
Then set the appropriate value (and type) for the variables on the DECLARE line.
Cheers.
That depends on the usage pattern. One has to look at the usage big picture without getting lost in the details. For example, if the usage pattern is 99% updates after the record has been created, then the 'UPSERT' is the best solution.
After the first insert (hit), it will be all single statement updates, no ifs or buts. The 'where' condition on the insert is necessary otherwise it will insert duplicates, and you don't want to deal with locking.
UPDATE <tableName> SET <field> = @field WHERE key = @key;
IF @@ROWCOUNT = 0
BEGIN
    INSERT INTO <tableName> (field)
    SELECT @field
    WHERE NOT EXISTS (SELECT * FROM <tableName> WHERE key = @key);
END
You can use the MERGE statement: it inserts the data if it does not exist, or updates it if it does.
MERGE INTO Employee AS e
using EmployeeUpdate AS eu
ON e.EmployeeID = eu.EmployeeID
If going the UPDATE if-no-rows-updated then INSERT route, consider doing the INSERT first to prevent a race condition (assuming no intervening DELETE)
INSERT INTO MyTable (Key, FieldA)
SELECT @Key, @FieldA
WHERE NOT EXISTS
(
    SELECT *
    FROM MyTable
    WHERE Key = @Key
)
IF @@ROWCOUNT = 0
BEGIN
    UPDATE MyTable
    SET FieldA = @FieldA
    WHERE Key = @Key
    IF @@ROWCOUNT = 0
        ... record was deleted, consider looping to re-run the INSERT, or RAISERROR ...
END
Apart from avoiding a race condition, if in most cases the record will already exist then this will cause the INSERT to fail, wasting CPU.
Using MERGE probably preferable for SQL2008 onwards.
MS SQL Server 2008 introduces the MERGE statement, which I believe is part of the SQL:2003 standard. As many have shown, it is not a big deal to handle single-row cases, but when dealing with large datasets one needs a cursor, with all the performance problems that come along. The MERGE statement is a much welcomed addition when dealing with large datasets.
Before everyone jumps to HOLDLOCK-s out of fear of these nefarious users running your sprocs directly :-) let me point out that you have to guarantee uniqueness of new PK-s by design (identity keys, sequence generators in Oracle, unique indexes for external IDs, queries covered by indexes). That's the alpha and omega of the issue. If you don't have that, no HOLDLOCK-s of the universe are going to save you, and if you do have that then you don't need anything beyond UPDLOCK on the first select (or to use update first).
Sprocs normally run under very controlled conditions and with the assumption of a trusted caller (mid tier). Meaning that if a simple upsert pattern (update+insert or merge) ever sees a duplicate PK, that means a bug in your mid-tier or table design, and it's good that SQL will yell a fault in such a case and reject the record. Placing a HOLDLOCK in this case equals eating exceptions and taking in potentially faulty data, besides reducing your perf.
Having said that, using MERGE, or UPDATE then INSERT, is easier on your server and less error-prone since you don't have to remember to add (UPDLOCK) to the first select. Also, if you are doing inserts/updates in small batches you need to know your data in order to decide whether a transaction is appropriate or not. If it's just a collection of unrelated records then an additional "enveloping" transaction will be detrimental.
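As a sketch of that argument (illustrative only; it assumes key uniqueness is already guaranteed by a primary key or unique index, which is exactly the precondition the paragraph above insists on):
BEGIN TRAN;

IF EXISTS (SELECT * FROM dbo.MyTable WITH (UPDLOCK) WHERE [key] = @key)
    UPDATE dbo.MyTable SET ... WHERE [key] = @key;
ELSE
    -- a duplicate key error here would mean a bug upstream; let it surface
    INSERT dbo.MyTable ([key], ...) VALUES (@key, ...);

COMMIT TRAN;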
Does the race condition really matter if you first try an update followed by an insert?
Let's say you have two threads that want to set a value for the same key:
Thread 1: value = 1
Thread 2: value = 2
Example race condition scenario
key is not defined
Thread 1 fails with update
Thread 2 fails with update
Exactly one of thread 1 or thread 2 succeeds with insert. E.g. thread 1
The other thread fails with insert (with error duplicate key) - thread 2.
Result: the "first" of the two threads to insert decides the value.
Wanted result: the last of the two threads to write data (update or insert) should decide the value.
But: in a multithreaded environment, the OS scheduler decides on the order of thread execution. In the above scenario, where we have this race condition, it was the OS that decided on the sequence of execution; i.e., it is wrong to say that "thread 1" or "thread 2" was "first" from a system viewpoint.
When the time of execution is so close for thread 1 and thread 2, the outcome of the race condition doesn't matter. The only requirement should be that one of the threads should define the resulting value.
For the implementation: If update followed by insert results in error "duplicate key", this should be treated as success.
Also, one should of course never assume that the value in the database is the same as the value you wrote last.
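A hedged sketch of that implementation idea in T-SQL (illustrative; MyTable, [key] and [value] are placeholder names, 2627 is the PRIMARY KEY/UNIQUE constraint violation error number, and THROW needs SQL Server 2012 or later):
UPDATE dbo.MyTable SET [value] = @value WHERE [key] = @key;

IF @@ROWCOUNT = 0
BEGIN
    BEGIN TRY
        INSERT dbo.MyTable ([key], [value]) VALUES (@key, @value);
    END TRY
    BEGIN CATCH
        -- Duplicate key: another session inserted first. Treat it as success
        -- and let the last writer win by re-running the UPDATE.
        IF ERROR_NUMBER() <> 2627
            THROW;
        UPDATE dbo.MyTable SET [value] = @value WHERE [key] = @key;
    END CATCH
END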
I tried the solution below and it works for me when concurrent requests for the insert statement occur.
begin tran
if exists (select * from table with (updlock,serializable) where key = @key)
begin
    update table set ...
    where key = @key
end
else
begin
    insert table (key, ...)
    values (@key, ...)
end
commit tran
You can use this query; it works in all SQL Server editions. It's simple and clear, but it needs two statements. You can use it if you can't use MERGE:
BEGIN TRAN

UPDATE table
SET Id = @ID, Description = @Description
WHERE Id = @Id

INSERT INTO table (Id, Description)
SELECT @Id, @Description
WHERE NOT EXISTS (SELECT NULL FROM table WHERE Id = @Id)

COMMIT TRAN
NOTE: If you downvote, please explain why.
Assuming that you want to insert/update a single row, the most optimal approach is to use SQL Server's REPEATABLE READ transaction isolation level:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION
    IF EXISTS (SELECT * FROM myTable WHERE key = @key)
        UPDATE myTable SET ...
        WHERE key = @key
    ELSE
        INSERT INTO myTable (key, ...)
        VALUES (@key, ...)
COMMIT TRANSACTION
This isolation level will prevent/block subsequent repeatable-read transactions from accessing the same row (WHERE key = @key) while the currently running transaction is open.
On the other hand, operations on another row (WHERE key = @key2) won't be blocked.
You can use:
INSERT INTO tableName (...) VALUES (...)
ON DUPLICATE KEY
UPDATE ...
Using this, if there is already an entry for the particular key then it will UPDATE; otherwise it will INSERT. (Note that ON DUPLICATE KEY UPDATE is MySQL syntax and is not available in SQL Server.)
In SQL Server 2008 you can use the MERGE statement
If you use ADO.NET, the DataAdapter handles this.
If you want to handle it yourself, this is the way:
Make sure there is a primary key constraint on your key column.
Then you:
Do the update
If the update affects no rows (because no record with the key exists), do the insert. If the update does affect a row, you are finished.
You can also do it the other way round, i.e. do the insert first, and do the update if the insert fails. Normally the first way is better, because updates are done more often than inserts.
Doing an if exists ... else ... involves doing two requests minimum (one to check, one to take action). The following approach requires only one where the record exists, two if an insert is required:
DECLARE @RowExists bit
SET @RowExists = 0
UPDATE MyTable SET DataField1 = 'xxx', @RowExists = 1 WHERE Key = 123
IF @RowExists = 0
    INSERT INTO MyTable (Key, DataField1) VALUES (123, 'xxx')
I usually do what several of the other posters have said with regard to checking whether it exists first and then doing whatever the correct path is. One thing you should remember when doing this is that the execution plan cached by SQL Server could be non-optimal for one path or the other. I believe the best way to do this is to call two different stored procedures.
FirstSP:
If Exists
Call SecondSP (UpdateProc)
Else
Call ThirdSP (InsertProc)
Now, I don't follow my own advice very often, so take it with a grain of salt.
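For illustration, a hedged sketch of that dispatcher pattern (all of the procedure, table and parameter names here are hypothetical):
CREATE PROCEDURE dbo.Widget_Save
    @Key int,
    @FieldA varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- Each branch calls its own procedure, so each gets its own cached plan.
    IF EXISTS (SELECT * FROM dbo.MyTable WHERE [Key] = @Key)
        EXEC dbo.Widget_Update @Key = @Key, @FieldA = @FieldA;
    ELSE
        EXEC dbo.Widget_Insert @Key = @Key, @FieldA = @FieldA;
END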
Do a select, if you get a result, update it, if not, create it.