SQL Query Execution more than once?

I have two tables (A, B) and one query.
My query does something like this:
Read From A
Update B with this data from A
Using the updated table B, set final value of A.
An example execution can be found in the question below:
Proper way to keep a single data in sql server?
Since the whole process is connected, this query should not be executed twice at the same time, or by two different users, until the process ends. How do I prevent this? Or does it already work securely like this?

Use a transaction with a restrictive isolation level:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
--select * from A
-- update B ....
--update A
WAITFOR DELAY '00:00:02' -- tables remain locked for 2 secs (hh:mm:ss)
COMMIT TRANSACTION
While the transaction is running, any attempt to read from or write to those tables will block and eventually time out.

EDIT :
You must use some kind of lock to lock the data while updating. http://msdn.microsoft.com/en-us/library/ms173763.aspx
Pseudocode:
int x = (select val from tableB) + 1
query = "update tableB set tableB.field = " + x + " where ......."
if query executed successfully:
    update tableA
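A rough T-SQL version of that pseudocode might look like this (tableA, tableB, val and field come from the pseudocode; the id column, @someId and the status column are made up for illustration, and the transaction wrapper follows the locking advice above):
DECLARE @someId INT = 1  -- hypothetical key of the row being processed
DECLARE @x INT
BEGIN TRANSACTION
SELECT @x = val + 1 FROM tableB WHERE id = @someId   -- read the current value and add 1
UPDATE tableB SET field = @x WHERE id = @someId      -- write the new value back
IF @@ROWCOUNT > 0                                     -- only continue if the update actually hit a row
    UPDATE tableA SET status = @x WHERE id = @someId
COMMIT TRANSACTION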

I assume your tables A and B have some primary key, e.g. EmployeeID. In that case a simple solution is to create a table (say Lock_Table) which keeps a record of the EmployeeID currently being modified.
So here you would need to go like this:
BEGIN TRANSACTION
1- Read EmployeeID From A
2- Check if EmployeeID already exists in Lock_Table. If Yes then Quit Else insert that EmployeeID in Lock_Table
3- Update B with this data (the EmployeeID in this case) from A
4- Using the updated table B, set final value of A.
5- Delete this EmployeeID from the Lock_Table
COMMIT TRANSACTION
On any error ROLLBACK the Transaction.
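A rough T-SQL sketch of those steps (Lock_Table and EmployeeID come from the steps above; the columns Amount, Total and FinalValue are made up for illustration):
BEGIN TRANSACTION
DECLARE @EmployeeID INT, @Amount INT
-- 1: read from A
SELECT TOP (1) @EmployeeID = EmployeeID, @Amount = Amount FROM A
-- 2: quit if this EmployeeID is already being processed, otherwise register it
IF EXISTS (SELECT 1 FROM Lock_Table WHERE EmployeeID = @EmployeeID)
BEGIN
    ROLLBACK TRANSACTION
    RETURN
END
INSERT INTO Lock_Table (EmployeeID) VALUES (@EmployeeID)
-- 3: update B with the data from A
UPDATE B SET Total = Total + @Amount WHERE EmployeeID = @EmployeeID
-- 4: using the updated table B, set the final value of A
UPDATE A SET FinalValue = (SELECT Total FROM B WHERE EmployeeID = @EmployeeID) WHERE EmployeeID = @EmployeeID
-- 5: release the "lock"
DELETE FROM Lock_Table WHERE EmployeeID = @EmployeeID
COMMIT TRANSACTION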
Hope it helps.

Related

Oracle SQL update double-check locking

Suppose we have table A with fields time: date, status: int, playerId: int, serverid: int
We added a unique constraint on time, playerId and serverid (UNQ_TIME_PLAYERID_SERVERID).
At some point we try to update all matching rows in table A with a new status and time:
update A set status = 1, time = sysdate where serverid = XXX and status != 1 and time > sysdate
The problem is that there are two separate processes on separate machines that can execute the same update at the same sysdate.
When that happens, a UNQ_TIME_PLAYERID_SERVERID violation occurs!
Is there any way to force Oracle to re-check the WHERE clause right before the actual update (once the row lock is acquired)?
I do not want to use any 'select for update' constructs.
If it's really the same update 100% of the time, then just catch the exception and ignore it.
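If you take that approach, a minimal PL/SQL sketch (the update is the one from the question; :serverid stands in for however you pass the server id):
BEGIN
  UPDATE A
     SET status = 1, time = SYSDATE
   WHERE serverid = :serverid AND status != 1 AND time > SYSDATE;
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    NULL;  -- the other process already applied the same change, so ignore the ORA-00001
END;
/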
In case you want to prevent the error from occurring in the first place, you need to implement some logic to prevent the second update statement from ever executing.
I could think of a "lock table" just for this purpose. Create a table TABLE_A_LOCK_TB (add columns based on what information you want to have stored there for administrative reasons, e.g. user who set the lock or a timestamp, ...).
Before you execute an update statement on table A, just insert a row to TABLE_A_LOCK_TB. Once an update was successful, delete said row.
Before executing any update statement on table A, just check whether TABLE_A_LOCK_TB contains a row. If it doesn't, your update is good to go; if it does, you don't execute the update.
To make this process easier you could just write a package for "locking" and "unlocking" table A by inserting / deleting a row from the TABLE_A_LOCK_TB. Also implement a function to check the "lock status".
If you need this logic for several tables you can also make it dynamic by just having a column holding the table name in TABLE_A_LOCK_TB and checking against that.
In your application logic you can then handle every update like this (pseudocode):
IF your_lock_package.lock_status(table_name) = FALSE THEN
    your_lock_package.set_lock(table_name);
    -- update statement(s)
    your_lock_package.release_lock(table_name);
ELSE
    -- "error" handling / inform the user and exit
END IF;

SQL Server: make an update trigger not fire when no value actually changed

I want to track the update changes in a table via a trigger:
CREATE TABLE dbo.TrackTable(...columns same as target table)
GO
CREATE TRIGGER dboTrackTable
ON dbo.TargetTable
AFTER UPDATE
AS
INSERT INTO dbo.TrackTable (...columns)
SELECT (...columns)
FROM Inserted
However, in production some of the update queries select rows with broad conditions and update them all regardless of whether they actually change, like
UPDATE Targettable
SET customer_type = 'VIP'
WHERE 1 = 1
--or is_obsolete = 0 or register_date < '20160101' something
But because of the table size, and for analysis purposes, I only want to capture rows whose data actually changed. How can I achieve this?
My tracking table has many columns (so I would rather not compare the inserted and deleted values column by column), but its structure seldom changes.
I guess the following code will be useful.
CREATE TABLE dbo.TrackTable(...columns same as target table)
GO
CREATE TRIGGER dboTrackTable
ON dbo.TargetTable
AFTER UPDATE
AS
INSERT INTO dbo.TrackTable (...columns)
SELECT *
FROM Inserted
EXCEPT
SELECT *
FROM Deleted
I realize this post is a couple months old now, but for anyone looking for a well-rounded answer:
To exit the trigger if no rows were affected on SQL Server 2016 and up, Microsoft recommends using the built-in ROWCOUNT_BIG() function in the Optimizing DML Triggers section of the Create Trigger documentation.
Usage:
IF ROWCOUNT_BIG() = 0
RETURN;
To ensure you are excluding rows that were not changed, you'll need to do a compare of the inserted and deleted tables inside the trigger. Taking your example code:
INSERT INTO dbo.TrackTable (...columns)
SELECT (...columns)
FROM Inserted i
INNER JOIN deleted d
ON d.[SomePrimaryKeyCol]=i.[SomePrimaryKeyCol] AND
i.customer_type<>d.customer_type
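Putting the two ideas together, a rough sketch of a complete trigger (TargetTable, TrackTable, customer_type and SomePrimaryKeyCol come from the snippets above; everything else is illustrative):
CREATE TRIGGER dbo.trg_TargetTable_TrackChanges
ON dbo.TargetTable
AFTER UPDATE
AS
BEGIN
    -- bail out immediately if the UPDATE touched no rows
    -- (this check has to come first: a SET statement would reset the row count)
    IF ROWCOUNT_BIG() = 0
        RETURN;
    SET NOCOUNT ON;
    -- log only rows whose tracked value actually changed
    INSERT INTO dbo.TrackTable (SomePrimaryKeyCol, customer_type)
    SELECT i.SomePrimaryKeyCol, i.customer_type
    FROM inserted i
    INNER JOIN deleted d
        ON d.SomePrimaryKeyCol = i.SomePrimaryKeyCol
       AND i.customer_type <> d.customer_type;
END;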
Microsoft documentation and w3schools are great resources for learning how to leverage various types of queries and trigger best practices.
Prevent trigger from doing anything if no rows changed.
Writing-triggers-the-right-way
CREATE TRIGGER the_trigger on dbo.Data
after update
as
begin
if @@ROWCOUNT = 0
return
set nocount on
/* Some Code Here */
end
Get a list of the rows affected by the update:
CREATE TRIGGER the_trigger on dbo.data
AFTER UPDATE
AS
SELECT * from inserted
Previous stack overflow on triggers
@anna: as per @Oded's answer, when an update is performed, the old values for the rows are in the deleted table and the new values are in the inserted table.

Can an updated with nested select be considered atomic in Sybase?

I am trying something like this:
set rowcount 10 -- fetch only 10 rows
update tableX set x = @BatchId where id in (select id from tableX where x = 0)
Basically, mark 10 records as booked by supplying a batch ID.
So my question is: if this proc is executed in parallel, can I guarantee that the update with the nested select is atomic, and that no two invocations will select an overlapping set of records from tableX for booking?
Thanks
To guarantee that no such overlaps occur, you should:
(i) put BEGIN TRANSACTION - COMMIT around the statement
(ii) put the HOLDLOCK keyword directly behind 'tableX' (or run the whole statement at isolation level 3).
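A sketch of what that looks like with the statement from the question (written with @BatchId as the proc's parameter; the HOLDLOCK is applied here to the tableX reference in the nested select, which is one reading of point (ii)):
BEGIN TRANSACTION
    SET ROWCOUNT 10   -- book at most 10 rows, as in the question
    UPDATE tableX
    SET x = @BatchId
    WHERE id IN (SELECT id FROM tableX HOLDLOCK WHERE x = 0)
    SET ROWCOUNT 0    -- reset the limit
COMMIT TRANSACTION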

MS SQL locking for update

I am implementing a competition where there might be a lot of simultaneous entries. I am collecting some user data, which I am putting in one table called entries. I have another table of pre-generated unique discount codes, called discountCodes, and I am assigning one to each entry. I thought I would do this by putting an entry id in the discountCodes table.
As there may be a lot of concurrent users I think I should select the first unassigned row and then assign the entry id to that row. I need to make sure between picking an unassigned row and adding the entry id that another thread doesn't find the same row.
What is the best way of ensuring that the row doesn't get assigned twice?
Something like this can be done:
The following example sets the TRANSACTION ISOLATION LEVEL for the session. For each Transact-SQL statement that follows, SQL Server holds all of the shared locks until the end of the transaction. Source: MSDN
USE databaseName;
GO
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
GO
BEGIN TRANSACTION;
GO
SELECT *
FROM Table1;
GO
SELECT *
FROM Table2;
GO
COMMIT TRANSACTION;
GO
Read more: SET TRANSACTION ISOLATION LEVEL
I would recommend building a bridge table with an EntryId and a DiscountCodeId, instead of putting the EntryId in the DiscountCodes table. Place a unique constraint on each of those fields.
This way your entry point will encounter a constraint violation when it tries to enter a duplicate.
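For illustration, a hypothetical sketch of such a bridge table (all names made up):
CREATE TABLE dbo.EntryDiscountCodes
(
    EntryId        INT NOT NULL,
    DiscountCodeId INT NOT NULL,
    CONSTRAINT UQ_EntryDiscountCodes_Entry UNIQUE (EntryId),
    CONSTRAINT UQ_EntryDiscountCodes_Code  UNIQUE (DiscountCodeId)
);
With that in place, a second attempt to assign the same entry or the same code fails with a constraint violation instead of silently double-assigning.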
WITH e AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY id) rn
FROM entries ei
WHERE NOT EXISTS
(
SELECT NULL
FROM discountCodes dci
WHERE dci.entryId = ei.id
)
),
dc AS
(
SELECT *,
ROW_NUMBER() OVER (ORDER BY id) rn
FROM discountCodes
WHERE entryId IS NULL
)
UPDATE dc
SET dc.entryId = e.id
FROM e
JOIN dc
ON dc.rn = e.rn
I would just put an IDENTITY field on each table and let the corresponding entry match the corresponding discountCode - i.e. if you have a thousand discountCodes up front, your identity column in the discountCodes table will range from 1 to 1000. That will match your first 1 to 1000 entries. If you get more than 1000 entries, just add one discountCode per additional entry.
That way SQL Server handles all the problematic "get the next number in the sequence" logic for you.
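For example, assuming an identity column named Id on both tables, the pairing is just a join on the identity values (column names assumed):
SELECT e.Id AS EntryId, d.Code AS DiscountCode
FROM entries e
JOIN discountCodes d ON d.Id = e.Id;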
You can try using sp_getapplock to synchronize the write operation; just make sure every writer locks against the same resource name, like:
DECLARE #state Int
BEGIN TRAN
-- here we're using 1 second as the timeout, but you should determine the right minimum value for your instance
EXEC @state = sp_getapplock 'SyncIt', 'Exclusive', 'Transaction', 1000
-- do insert/update/etc...
-- if you like you can be a little verbose and explicit, otherwise line below shouldn't be needed
EXEC sp_releaseapplock 'SyncIt', 'Transaction'
COMMIT TRAN

Records in deleted table that are not in the delete statement?

We have a large number of databases with the same schema, which each have a table with triggers to sync records with another table in a central database. When the table is updated, inserted into, or deleted from, the table in the central database also has a record updated, inserted, or deleted.
We've been having records mysteriously disappear from the table in the central database. When researching the problem I found that when the insert/delete trigger fires, there are records in the deleted table that are not from the current delete statement. They aren't even records from the same database. They look like the old-value rows from update statements on the same table in another database.
All the information I could find says records in the deleted table should be from the statement that caused the trigger to fire.
Can anyone explain why I'm seeing this behavior instead?
EDIT: This is what the insert/delete trigger looks like:
DECLARE #TenantID INT
SELECT #TenantID = ID FROM [CentralDB]..Tenants WHERE db = DB_Name()
INSERT INTO [CentralDB].[dbo].[TenantUsers]
(..snipped list of columns...)
SELECT
...snipped list of columns...
FROM inserted
WHERE UserNameID NOT IN (0,6)
DELETE FROM [CentralDB]..TenantUsers WHERE UserNameID in
(SELECT UserNameID FROM DELETED WHERE UserNameID NOT IN (0,1,6))
And the update trigger:
DECLARE #TenantID INT
SELECT #TenantID = ID FROM [CentralDB]..Tenants WHERE db = DB_Name()
UPDATE [CentralDB].[dbo].[TenantUsers]
SET ...snipped list of columns...
FROM INSERTED i
WHERE i.UserNameID = TenantUsers.UserNameID
AND i.UserNameID NOT IN (0,6)
You've probably done this, but if records are being deleted which ought not to be, then I'd go round the databases (or write a script to do it) and check that the triggers which contain the delete statements only fire for inserts and deletes. Maybe there is a rogue trigger which fires on update and executes the delete command?
It's a long shot.
Other than this, I would check there are no other triggers in the chain which can delete from the central DB table.
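A rough script along those lines, using the undocumented sp_MSforeachdb procedure to list update triggers whose body contains a DELETE (adjust the filter to your schema):
EXEC sp_MSforeachdb '
USE [?];
SELECT DB_NAME() AS database_name, t.name AS trigger_name
FROM sys.triggers t
JOIN sys.sql_modules m ON m.object_id = t.object_id
WHERE OBJECTPROPERTY(t.object_id, ''ExecIsUpdateTrigger'') = 1
  AND m.definition LIKE ''%DELETE%'';
';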
There appear to be no obvious issues with the trigger design itself.