Select for Update Lock - sql

I want to prevent simultaneous updates (by multiple sessions) to a record in my stored procedure.
1. I am using a SELECT FOR UPDATE statement on the particular row that I want to update. This locks the record.
2. I then update the record and commit, so the lock is released and the record is available for another user/session to work with.
However, when I run the procedure, I find that simultaneous updates are still happening, which means SELECT FOR UPDATE is not working as expected.
Please provide some suggestions.
Sample code is below:
IF <condition> THEN
  -- do something
ELSIF <condition> THEN
  BEGIN
    SELECT HIGH_NBR INTO P_NBR FROM ROUTE
    WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>
    FOR UPDATE OF HIGH_NBR;
    UPDATE ROUTE SET HIGH_NBR = HIGH_NBR + 1
    WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>;
    COMMIT;
  END;
END IF;
In a multi-user environment, I am observing that the SELECT FOR UPDATE lock is not taking effect.
I just tested the scenario with two different computers (sessions). Here is what I did:
From one computer, I executed the SELECT FOR UPDATE statement, locking a row.
From another computer, I executed an UPDATE statement for the same record.
The update did not happen; the SQL execution of the UPDATE statement did not complete, even after a long time.
When is the lock released if we issue a SELECT FOR UPDATE for a record?

First of all, you need to set auto-commit to false before starting the query.
To check that your code is working, you can use two Java threads with a cyclic barrier, and you should also add timestamps in your code to check when each point in the code is reached.
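As a minimal sketch of the two-session test (assuming the ROUTE table and placeholders from the question, with auto-commit off in both sessions):
-- Session 1:
SELECT HIGH_NBR FROM ROUTE
WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>
FOR UPDATE OF HIGH_NBR;            -- row is now locked by session 1

-- Session 2, while session 1 holds the lock:
SELECT HIGH_NBR FROM ROUTE
WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>
FOR UPDATE OF HIGH_NBR;            -- blocks here until session 1 commits or rolls back

-- Session 1:
UPDATE ROUTE SET HIGH_NBR = HIGH_NBR + 1
WHERE LC_CD = <KL_LCD> AND ROUTE_NBR = <KL_ROUTE_NBR>;
COMMIT;                            -- releases the lock; session 2's SELECT now returns
If a session reads without FOR UPDATE, or auto-commit silently commits right after each statement, it will not block, and the locking will appear not to work.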

Related

Stored Procedure for batch delete in Firebird

I need to delete a bunch of records (literally millions) but I don't want to do it in one individual statement, because of performance issues. So I created a view:
CREATE VIEW V1
AS
SELECT FIRST 500000 *
FROM TABLE
WHERE W_ID = 14
After that I do a bunch of deletes, for example:
DELETE FROM V1 WHERE TS < '2021-01-01'
What I want is to put this logic into a WHILE loop inside a stored procedure. I tried a SELECT COUNT query like this:
SELECT COUNT(*)
FROM TABLE
WHERE W_ID = 14 AND TS < '2021-01-01';
Can I use this number in the same procedure as a condition and how can I manage that?
This is what I have tried, and I get an error:
ERROR: Dynamic SQL Error; SQL error code = -104; Token unknown; WHILE
Code:
CREATE PROCEDURE DeleteBatch
AS
DECLARE VARIABLE CNT INT;
BEGIN
  SELECT COUNT(*) FROM TABLE WHERE W_ID = 14 AND TS < '2021-01-01' INTO :cnt;
  WHILE cnt > 0 DO
  BEGIN
    IF (cnt > 0) THEN
      DELETE FROM V1 WHERE TS < '2021-01-01';
    END
    ELSE break;
END
I just can't wrap my head around this.
To clarify: in my previous question I wanted to know how to manage garbage collection after many deleted records, and I did what was suggested (SELECT * FROM TABLE; or gfix -sweep) and that worked very well. As mentioned in the comments, the correct statement is SELECT COUNT(*) FROM TABLE;
After that an even bigger database was given to me, above 50 million rows. The problem was that the DB was very slow to operate with, and I managed to get the server it was on killed with a DELETE statement meant to clean the database.
That's why I wanted to try deleting in batches. The slowdown problem there turned out to be purely hardware (the HDD had failed, and we replaced it). After that there was no problem executing statements or doing a backup and restore to reclaim disk space.
Provided the data you need to delete never has to be rolled back once the stored procedure is kicked off, there is another way to handle massive DELETEs in a stored procedure.
The example stored procedure deletes rows 500,000 at a time and loops until there are no more rows to delete. The AUTONOMOUS TRANSACTION block puts each DELETE statement in its own transaction, which commits immediately after the statement completes. This amounts to an implicit commit inside a stored procedure, which you normally can't do.
CREATE OR ALTER PROCEDURE DELETE_TABLEXYZ_ROWS
AS
DECLARE VARIABLE RC INTEGER;
BEGIN
  RC = 9999;
  WHILE (RC > 0) DO
  BEGIN
    IN AUTONOMOUS TRANSACTION DO
    BEGIN
      -- each batch is deleted and committed in its own transaction
      DELETE FROM TABLEXYZ ROWS 500000;
      RC = ROW_COUNT;  -- 0 once there is nothing left to delete
    END
  END

  -- the full scan forces garbage collection of the deleted record versions
  SELECT COUNT(*)
  FROM TABLEXYZ
  INTO :RC;
END
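Assuming the procedure compiles as above, you kick it off with EXECUTE PROCEDURE. When creating it in isql or a similar tool, change the statement terminator first; otherwise the script is split at the first semicolon inside the body, which is a common cause of a 'Token unknown' error like the one in the question:
SET TERM ^ ;
-- CREATE OR ALTER PROCEDURE ... (body as above, terminated with ^)
SET TERM ; ^

EXECUTE PROCEDURE DELETE_TABLEXYZ_ROWS;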
because of performance issues
What are those exactly? I do not think you are actually improving performance by just running the delete in a loop, whether within the same transaction or even in different transactions within the same timespan. You seem to be solving the wrong problem. The issue is not how you create "garbage", but how and when Firebird collects it.
For example, SELECT COUNT(*) in the InterBase/Firebird engines means a natural scan over the whole table, and garbage collection is often triggered by it; that scan can itself take a long time if a lot of garbage was created (and a massive delete surely creates a lot, no matter whether it is done with one million-row statement or a million one-row statements).
How to delete large data from Firebird SQL database
If you really want to slow down deletion, you have to spread that activity around the clock and make your client application call a deleting SP, for example, once every 15 minutes. You would have to add a column to the table flagging rows marked for deletion, and then do the job like this:
CREATE PROCEDURE DeleteBatch(CNT INT)
AS
DECLARE VARIABLE ROW_ID INTEGER;
BEGIN
  FOR SELECT ID FROM TABLENAME WHERE MARKED_TO_DEL > 0 INTO :ROW_ID
  DO BEGIN
    CNT = CNT - 1;
    DELETE FROM TABLENAME WHERE ID = :ROW_ID;
    IF (CNT <= 0) THEN LEAVE;
  END
  SELECT COUNT(1) FROM TABLENAME INTO :ROW_ID; /* force GC now */
END
...and every 15 minutes you do EXECUTE PROCEDURE DeleteBatch(1000).
Overall this would probably only be slower, because of the single-row "precision targeting", but at least it would spread out the delays.
Use DELETE...ROWS.
https://firebirdsql.org/file/documentation/html/en/refdocs/fblangref25/firebird-25-language-reference.html#fblangref25-dml-delete-orderby
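As a rough sketch (assuming the table and filter from the question), each statement removes at most one batch; repeat it, each run in its own transaction, until it affects zero rows:
DELETE FROM TABLE
WHERE W_ID = 14 AND TS < '2021-01-01'
ROWS 500000;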
But as I already said in the answer to the previous question, it is better to spend time investigating the source of the slowdown than to work around it by deleting data.

Roll-back last UPDATE SQL query in SQL Server 2014

I have accidentally executed the query:
UPDATE TableName
SET Name='Ram'
How can I undo this change?
Before running updates, and especially deletes, always test them. For a delete, put the statement into a SELECT block first:
SELECT COUNT(NAME)
FROM TableName
WHERE <your delete condition>
Make sure the number of records returned matches what you want to delete. For an update it is a little more involved: you will have to use a transaction.
BEGIN TRANSACTION
UPDATE TableName
SET Name = 'Ram'
SELECT *
FROM TableName
WHERE Name = 'Ram'
--Rollback Transaction
--Commit Transaction
Based on the transaction above: first run everything with COMMIT and ROLLBACK left commented out; the SELECT lets you validate that the update did what you wanted. If it did, highlight and run just the COMMIT without the comment marks. If it didn't, highlight and run just the ROLLBACK without the comment marks to undo it, and try again. Hope this helps in the future.

@@ROWCOUNT check not detecting zero row count

I have a SQL Server stored procedure (on SQL Server 2008 R2) that performs several different table updates. When rows have been updated, I want to record information in an audit table.
Here is my pseudo code:
UPDATE tblName SET flag = 'Y' WHERE flag = 'N'
IF @@ROWCOUNT > 0
BEGIN
INSERT INTO auditTable...etc
END
Unfortunately, even when zero rows are updated it still records the action in the audit table.
Note: There are no related triggers on the table being updated.
Any ideas why this could be happening?
Any statement executed in T-SQL will set @@ROWCOUNT, even the IF statement, so the general rule is to capture the value in the statement immediately following the statement you're interested in.
So after
update table set ....
you want
SELECT @mycount = @@ROWCOUNT
Then you use this value to do your flow control or messages.
As the docs state, even a simple variable assignment will set @@ROWCOUNT to 1.
This is also why, if you want people to diagnose the problem, you need to provide the actual code, not pseudo code.
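Putting it together, a minimal sketch using the pseudo code's names (the audit table's columns are not shown in the question, so the INSERT here is hypothetical):
DECLARE @mycount INT;

UPDATE tblName SET flag = 'Y' WHERE flag = 'N';
SELECT @mycount = @@ROWCOUNT;  -- capture immediately; any later statement overwrites @@ROWCOUNT

IF @mycount > 0
BEGIN
    INSERT INTO auditTable (EventTime, RowsFlagged)  -- hypothetical columns
    VALUES (GETDATE(), @mycount);
END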

How do I use a database to manage a semaphore?

If several instances of the same code are running on different servers, I would like to use a database to make sure a process doesn't start on one server if it's already running on another.
I could probably come up with some workable SQL commands that used Oracle transaction processing, latches, or whatever, but I'd rather find something that's tried and true.
Years ago, a developer who was a SQL wiz had a single SQL transaction that took the semaphore and returned true if it got it, and false if it didn't. Then at the end of my processing, I'd run another SQL transaction to release the semaphore. It would be cool, but I don't know if it's possible for a database-supported semaphore to have a timeout. A timeout would be a huge bonus!
EDIT:
Here are what might be some workable SQL commands, but with no timeout except through a cron job hack:
---------------------------------------------------------------------
--Setup
---------------------------------------------------------------------
CREATE TABLE "JOB_LOCKER" ( "JOB_NAME" VARCHAR2(128 BYTE), "LOCKED" VARCHAR2(1 BYTE), "UPDATE_TIME" TIMESTAMP (6) );
CREATE UNIQUE INDEX "JOB_LOCKER_PK" ON "JOB_LOCKER" ("JOB_NAME") ;
ALTER TABLE "JOB_LOCKER" ADD CONSTRAINT "JOB_LOCKER_PK" PRIMARY KEY ("JOB_NAME");
ALTER TABLE "JOB_LOCKER" MODIFY ("JOB_NAME" NOT NULL ENABLE);
ALTER TABLE "JOB_LOCKER" MODIFY ("LOCKED" NOT NULL ENABLE);
insert into job_locker (job_name, locked) values ('myjob','N');
commit;
---------------------------------------------------------------------
--Execute at the beginning of the job
--AUTOCOMMIT MUST BE OFF!
---------------------------------------------------------------------
select * from job_locker where job_name='myjob' and locked = 'N' for update NOWAIT;
--returns one record if it's OK; otherwise raises ORA-00054. Any other thread attempting to lock the record also gets ORA-00054.
update job_locker set locked = 'Y', update_time = sysdate where job_name = 'myjob';
--1 row updated. Any other thread attempting to lock the record gets ORA-00054.
commit;
--Any other thread attempting to get the record with locked = 'N' gets zero results.
--You could have code to pull for that job name and locked = 'Y' and if still zero results, add the record.
---------------------------------------------------------------------
--Execute at the end of the job
---------------------------------------------------------------------
update job_locker set locked = 'N', update_time = sysdate where job_name = 'myjob';
--Any other thread attempting to get the record with locked = 'N' gets no results.
commit;
--One record returned to any other thread attempting to get the record with locked = 'N'.
---------------------------------------------------------------------
--If the above 'end of the job' fails to run (system crash, etc)
--The 'locked' entry would need to be changed from 'Y' to 'N' manually
--You could have a periodic job to look for old timestamps and locked='Y'
--to clear those.
---------------------------------------------------------------------
You should look into DBMS_LOCK. Essentially, it exposes the enqueue locking mechanism that Oracle uses internally, except that it lets you define a lock type of 'UL' (user lock). Locks can be held shared or exclusive, and a request to take a lock, or to convert a lock from one mode to another, supports a timeout.
I think it will do what you want.
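A minimal sketch (the lock name and the 10-second timeout are arbitrary choices; the return codes are per the DBMS_LOCK documentation):
DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- map a lock name of your choosing to a lock handle
  DBMS_LOCK.ALLOCATE_UNIQUE('MYJOB_SEMAPHORE', l_handle);

  -- try to take the lock exclusively, waiting up to 10 seconds
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 10,
                                release_on_commit => FALSE);

  IF l_status = 0 THEN
    -- got the semaphore: do the work, then release it
    l_status := DBMS_LOCK.RELEASE(l_handle);
  ELSIF l_status = 1 THEN
    NULL;  -- timed out: another session holds the lock
  END IF;
END;
/
A nice property is that the lock dies with the session, so a crashed job cannot leave the semaphore stuck the way a LOCKED='Y' flag in a table can.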
Hope that helps.

Which SQL Read TRANSACTION ISOLATION LEVEL do I want for long running insert?

I have a long running insert transaction that inserts data into several related tables.
When this insert is running, I cannot perform a select * from MainTable. The select just spins its wheels until the insert is done.
I will be performing several of these inserts at the same/overlapping time. To check that the information is not inserted twice, I query the MainTable first to see if an entry is there and that its processed bit is not set.
During the insert transaction, it flips the MainTable processed bit for that row.
So I need to be able to read the table and also be able to tell if the specific row is currently being updated.
Any ideas on how to set this up in Microsoft SQL 2005? I am looking through the SET TRANSACTION ISOLATION LEVEL documentation.
Thank you,
Keith
EDIT: I do not think the same insert batch will happen at the same time. These are binary files that are being processed, with their data inserted into the database. I check that a file has not been processed before I parse it and insert its data. During that check, if the file has not been seen before, I do a quick insert into the MainTable with the processed bit set to false.
Is there a way to lock the row being updated instead of the entire table?
You may want to rethink your process before you use READ UNCOMMITTED. There are many good reasons for isolated transactions. If you use READ UNCOMMITTED you may still get duplicates, because there is a chance both inserts will check for an existing row at the same time and, finding none, both will insert, creating duplicates. Try breaking the work into smaller batches, or issue periodic COMMITs.
EDIT
You can wrap the MainTable update in a transaction of its own, which will free up that table more quickly, but you may still get conflicts with the other tables. For example:
DECLARE @ProcessedBit BIT

BEGIN TRANSACTION
SELECT @ProcessedBit = ProcessedBit FROM MainTable WHERE ID = XXX
IF @ProcessedBit = 0
    UPDATE MainTable SET ProcessedBit = 1 WHERE ID = XXX
COMMIT TRANSACTION

IF @ProcessedBit = 0
BEGIN
    BEGIN TRANSACTION
    -- start long running process
    ...
    COMMIT TRANSACTION
END
EDIT to enable error recovery
DECLARE @ProcessedStatus VARCHAR(20)

BEGIN TRANSACTION
SELECT @ProcessedStatus = ProcessedStatus FROM MainTable WHERE ID = XXX
IF @ProcessedStatus = 'Not Processed'
    UPDATE MainTable SET ProcessedStatus = 'Processing' WHERE ID = XXX
COMMIT TRANSACTION

IF @ProcessedStatus = 'Not Processed'
BEGIN
    BEGIN TRANSACTION
    -- start long running process
    ...
    IF <no errors>  -- pseudo condition, as in the original
    BEGIN
        UPDATE MainTable SET ProcessedStatus = 'Processed' WHERE ID = XXX
        COMMIT TRANSACTION
    END
    ELSE
        ROLLBACK TRANSACTION
END
The only isolation level that allows one transaction to read changes made by another transaction still in progress (before it commits) is:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
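A minimal sketch for the session doing the reads (WITH (NOLOCK) is the per-query equivalent of the session-level setting):
-- for the whole session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM MainTable;

-- or per query, without changing the session level:
SELECT * FROM MainTable WITH (NOLOCK);
Either way the read no longer blocks on the insert's locks, but it can see rows the insert may still roll back, which is the dirty-read risk described above.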