Can updating the same value from two sessions cause a deadlock in Oracle?

I have an application where a user clicks a button on the UI, which triggers an Oracle function. I want to avoid multiple parallel runs of that function in the DB (there should be only one ongoing run at a time). Can I use the custom locking mechanism below to achieve this without worrying about deadlock?
My Approach -
The flag is initially NULL
If multiple sessions trigger the function at the same time, only one of them will continue, because only one of them will be able to update the flag
The function updates the flag back to NULL after processing is done
DDL
create table test_oracle_lock (id int, flag varchar2(10), primary key (id));
Custom Locking Code to avoid parallel runs
update test_oracle_lock set flag = 'In Use' where flag is null and id = 1;
updated_rows := sql%rowcount;
commit;
IF updated_rows = 0 THEN
  -- unable to update the flag (i.e. unable to acquire the custom lock), so leave the function
  RETURN;
ELSE
  -- execute all SQL statements to process the data, then set the flag back to NULL
  update test_oracle_lock set flag = NULL where flag = 'In Use' and id = 1;
  commit;
END IF;

A better solution is SELECT ... FOR UPDATE on the table. Do that at the beginning and you don't have to worry about manual locking: it locks only the row in question, so it won't interfere with other sessions, and if there is a rollback the lock is released automatically.
select id into l_id from my_table where id = 1 for update;
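A minimal sketch of that approach, assuming the id = 1 row is pre-seeded (the procedure name run_my_function is illustrative; NOWAIT makes a concurrent session fail immediately instead of queueing behind the lock):
create or replace procedure run_my_function is
  l_id       test_oracle_lock.id%type;
  row_locked exception;
  pragma exception_init(row_locked, -54);  -- ORA-00054: resource busy
begin
  -- take the row lock; a concurrent run raises ORA-00054 and we bail out
  select id into l_id
    from test_oracle_lock
   where id = 1
     for update nowait;
  -- ... process the data here ...
  commit;  -- ends the transaction and releases the lock
exception
  when row_locked then
    null;  -- another session is already running the function; exit quietly
end;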

Related

How to rewrite a trigger causing an update deadlock error

I have the following trigger. It is executed after an UPDATE statement issued by a REST API request. My issue is that when two API calls arrive concurrently, the trigger is executed twice, and a deadlock error comes up because the first transaction has not finished. Is there any SQL trickery I can try? Can I check in TableB whether B_NEEDSRECALCULATION is already set to '+' and bypass the update, or maybe create a trigger on TableB?
CREATE OR ALTER TRIGGER TR_TESTTRIGGER_AU FOR TABLEA
ACTIVE AFTER UPDATE POSITION 0
AS
begin
  if (rdb$get_context('USER_TRANSACTION', 'WISASCRIPT') = '1') then exit;
  if ((new.A_FieldB IS NULL) and
      (old.A_FieldA <> new.A_FieldB)) THEN
  begin
    UPDATE TableB
    SET B_NEEDSRECALCULATION = '+'
    WHERE (B_TESTID = new.A_TESTIDFK);
  end
end
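A sketch of the bypass the question proposes: guard the UPDATE inside the trigger so rows already flagged are skipped (IS DISTINCT FROM is NULL-safe and needs Firebird 2.0 or later):
UPDATE TableB
SET B_NEEDSRECALCULATION = '+'
WHERE (B_TESTID = new.A_TESTIDFK)
AND B_NEEDSRECALCULATION IS DISTINCT FROM '+';
Whether this actually removes the conflict depends on the other transaction having committed; an uncommitted '+' from the concurrent API call is still invisible, so the second trigger may still block on the row.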

Is an insert and a corresponding trigger call an atomic process?

I have a table called LOCK and I want to ensure that no more than a single row with a given name and type WRITE exists. Multiple rows with type READ and an equal name are allowed, but only if there is no row with the same name and type WRITE.
create table "LOCK"
(
"LOCK_ID" NUMBER(19,0) NOT NULL,
"NAME" VARCHAR2(255 CHAR),
"TYPE" VARCHAR2(32 CHAR),
CONSTRAINT "SYS_LOCK_PK" PRIMARY KEY ("LOCK_ID")
);
Inserting a row has to be atomic; for instance, no query followed by an insert that depends on the query's result (because the data could have changed in the meantime).
To ensure atomicity I created a trigger that checks the condition mentioned above (raising an error on failure), but it occasionally ends up in various invalid states, such as two WRITE rows.
If the inserts are executed sequentially the trigger works perfectly, which leads to the assumption that insert + trigger is not an atomic process. If so, is there a safe way to solve my issue?
Here's the trigger:
create or replace trigger "LOCK_TRIGGER"
before insert on "LOCK"
referencing NEW AS NEW
for each row
declare
  c integer := 0;
begin
  select count(*) into c from "LOCK"
   where (:NEW."TYPE" = 'WRITE' and "NAME" = :NEW."NAME")
      or (:NEW."TYPE" = 'READ' and "NAME" = :NEW."NAME" and "TYPE" = 'WRITE');
  if (c > 0) then
    raise_application_error(-20634, 'Nope!');
  end if;
end;
A trigger doesn't help here in a multi-user environment. You need to serialize access to the particular lock name, and for that I would go for custom locks: the database package dbms_lock is made for this. You can create a function which does the following (see the sketch after this list):
acquires a custom lock for the incoming name - this lock should be requested with the option that it is not released on commit/rollback
performs the validation in the table for the name
inserts the record if possible (i.e. if validation passed) and commits it
releases the custom lock
returns the result (either OK or NOK)
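A minimal sketch of those steps (the function name acquire_and_insert, the 5-second timeout and the lock_seq sequence are illustrative assumptions; dbms_lock.request with release_on_commit => false keeps the lock across the commit, and allocate_unique itself issues a commit):
create or replace function acquire_and_insert(p_name varchar2, p_type varchar2)
  return varchar2
is
  l_handle varchar2(128);
  l_status integer;
  c        integer := 0;
begin
  -- map the lock name to a handle; the mapping is visible to all sessions
  dbms_lock.allocate_unique('LOCK_' || p_name, l_handle);
  -- exclusive mode; release_on_commit => false survives the commit below
  l_status := dbms_lock.request(l_handle, dbms_lock.x_mode,
                                timeout => 5, release_on_commit => false);
  if l_status <> 0 then
    return 'NOK';  -- 1 = timeout, 2 = deadlock, 3 = parameter error
  end if;
  -- the validation cannot race: no other session holds this name now
  select count(*) into c from "LOCK"
   where "NAME" = p_name
     and (p_type = 'WRITE' or "TYPE" = 'WRITE');
  if c = 0 then
    insert into "LOCK" ("LOCK_ID", "NAME", "TYPE")
    values (lock_seq.nextval, p_name, p_type);
    commit;
  end if;
  l_status := dbms_lock.release(l_handle);
  return case when c = 0 then 'OK' else 'NOK' end;
end;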
Hope that helps.

Disable Trigger for a particular DELETE Query

I have a Ruby app. The app does inserts, updates and deletes on a particular table.
It does 2 kinds of INSERT: one kind should insert a record into the table and also log into the trigger_logs table; the other should just insert the record and do nothing else. In other words, one kind of insert should log that the insert happened into another table, and the other kind should just be a normal insert. Similarly, there are 2 kinds of UPDATE and DELETE.
I have achieved the 2 types of INSERT and UPDATE using a trigger_disable column. Please refer to the trigger code below.
So, when I do an INSERT, I set the trigger_disable boolean to true if I don't want to log the trigger. I do the same for an UPDATE.
But I am not able to differentiate between the 2 kinds of DELETE the way I do for an INSERT or UPDATE: the DELETE action is logged for both kinds of DELETE.
NOTE: I am logging all changes made under a certain condition, which is determined by the Ruby app. If the condition is not satisfied, I just need to do a normal INSERT, UPDATE or DELETE.
CREATE OR REPLACE FUNCTION notify_#{#table_name}()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
  changed_row_id varchar(100);
BEGIN
  IF TG_OP = 'DELETE' THEN
    -- When the trigger fires for a delete
    IF (OLD.trigger_disable IS NULL)
    OR (OLD.trigger_disable = false) THEN
      -- Skip the logging if trigger_disable is 'true'.
      -- The problem is here: this insertion into the
      -- trigger_logs table happens
      -- for all the delete statements.
      -- But during certain deletes I should not
      -- insert into trigger_logs
      INSERT INTO trigger_logs (table_name, action, row_id, dirty)
      VALUES (
        '#{#table_name}',
        CAST(TG_OP AS Text),
        OLD.id,
        true
      ) RETURNING id INTO changed_row_id;
    END IF;
    RETURN OLD;
  ELSE
    -- The trigger fires for an insert or update
    IF (NEW.trigger_disable IS NULL)
    OR (NEW.trigger_disable = false) THEN
      -- Skip the logging if trigger_disable is 'true'
      INSERT INTO trigger_logs (table_name, action, row_id, dirty)
      VALUES (
        '#{#table_name}',
        CAST(TG_OP AS Text),
        NEW.id,
        true
      ) RETURNING id INTO changed_row_id;
    ELSE
      NEW.trigger_disable := false;
    END IF;
    RETURN NEW;
  END IF;
END
$$;
I'm going to take a stab in the dark here and guess that you're trying to contextually control whether triggers get fired.
If so, perhaps you can use a session variable?
BEGIN;
SET LOCAL myapp.fire_trigger = 'false';
DELETE FROM ...;
COMMIT;
and in your trigger, test it:
IF current_setting('myapp.fire_trigger') = 'true' THEN
Note, however, that if the setting is missing from a session you won't get NULL, you'll get an error:
regress=> SELECT current_setting('myapp.xx');
ERROR: unrecognized configuration parameter "myapp.xx"
so you'll want to:
ALTER DATABASE mydb SET myapp.fire_trigger = 'true';
Also note that the parameter is text not boolean.
Finally, there's no security on session variables. So it's not useful for security audit, since anybody can come along and just SET myapp.fire_trigger = 'false'.
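Putting the pieces together inside the trigger function, a minimal sketch (the two-argument form of current_setting, available since PostgreSQL 9.6, returns NULL instead of raising an error when the setting is absent, so the ALTER DATABASE default becomes optional):
-- inside the DELETE branch: log only when the app has not switched logging off
IF coalesce(current_setting('myapp.fire_trigger', true), 'true') = 'true' THEN
  INSERT INTO trigger_logs (table_name, action, row_id, dirty)
  VALUES (TG_TABLE_NAME, TG_OP, OLD.id, true);
END IF;
RETURN OLD;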
(If this doesn't meet your needs, you might want to re-think whether you should be doing this with triggers at all, rather than at the application level).

BEFORE DELETE trigger

How do I stop a row from being deleted when its PK is referenced in another table (without an FK), using a trigger?
Would CALL cannot_delete_error stop the delete?
This is what I've got so far.
CREATE TRIGGER T1
BEFORE DELETE ON Clients
FOR EACH ROW
BEGIN
  SELECT Client, Ref FROM Clients K, Invoice F
  IF F.Client = K.Ref
    CALL cannot_delete_error
  END IF;
END
Use an 'INSTEAD OF DELETE' trigger.
Basically, you can evaluate whether or not you should delete the item. In the trigger you can ultimately decide to delete it like:
--test to see if you actually should delete it.
--if you do decide to delete it:
DELETE FROM MyTable
WHERE ID IN (SELECT ID FROM deleted)
One side note: remember that the 'deleted' table may contain several rows.
Another side note: try to do this outside of the DB if possible, or with a preceding query! Triggers are downright difficult to maintain. A simple query or function (e.g. dbo.udf_CanIDeleteThis()) can be much more versatile.
If you're using MySQL 5.5 or up you can use SIGNAL:
DELIMITER //
CREATE TRIGGER tg_fk_check
BEFORE DELETE ON clients
FOR EACH ROW
BEGIN
  IF EXISTS(SELECT *
              FROM invoices
             WHERE client_id = OLD.client_id) THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = 'Cannot delete a parent row: child rows exist';
  END IF;
END//
DELIMITER ;
Here is a SQLFiddle demo. Uncomment the last DELETE and click Build Schema to see it in action.
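A quick check of the trigger, assuming a clients row with client_id = 1 that still has invoices (1644 is the generic code MySQL reports for an unhandled SIGNAL):
DELETE FROM clients WHERE client_id = 1;
-- ERROR 1644 (45000): Cannot delete a parent row: child rows exist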

Compare-and-swap operation for row with selected id in database (Oracle or ANSI SQL)?

I want to implement failsafe, persistent synchronization in an application based on a database (Oracle in my case, but I'd like to see an ANSI SQL solution).
I work with tasks which can be run in different threads, applications or servers.
Tasks of the same type (I distinguish them by ID) may not run concurrently - that is why I need synchronization. Everything has access to the DB, so it is a good place for it!
Each thread/application/server can fail, so I need a way to remove an acquired lock from an ID after a timeout.
The first thing that comes to mind is to use a table with
ID
STATE
TS
fields. All I need is atomic operations which:
try to change the STATE value from completed to executing (to synchronize) and set TS to the current time; return false if STATE is not completed.
try to change the STATE value from executing/recovering to recovering if sysdate - TS > delay, and set TS to the current time (to be failsafe); else return false.
An SQL UPDATE statement does mostly what I want:
update TASK set STATE = 'executing', TS = sysdate
where ID = :id and STATE = 'completed'
and:
update TASK set STATE = 'recovering', TS = sysdate
where ID = :id and STATE in ('executing', 'recovering')
and sysdate - TS > :delay
The only issue that I see: how do I find out (from a Java application through JDBC) whether the update was actually performed (in order for this to be a true compare-and-swap operation)? Maybe by getting the number of updated rows (is this info available through JDBC)?
Am I correct in assuming that the UPDATE is atomic with respect to its WHERE condition?
Is there another way to implement failsafe, persistent synchronization based on a database?
PS My question differs from:
atomic compare and swap in a database (it uses 2 statements - a select and an update - whereas I end up with one).
The easiest way might be to create and run a stored procedure which could then return what the result of the change was.
Edit (expanded answer): Sorry, I thought you were looking for a way to get the answer back. I would do something like the following (although I don't have an Oracle instance at home to test it on, sorry):
create or replace function GetLockForRun(p_id    in task.id%type,
                                         p_delay in number)  -- delay in days, to match sysdate arithmetic
  return varchar2
is
  l_state  task.state%type;
  l_ts     task.ts%type;
  l_result varchar2(20);
begin
  select STATE, TS
    into l_state, l_ts
    from Task
   where ID = p_id
     for update;  -- this locks the row until you commit
  if l_state = 'completed' or sysdate - l_ts > p_delay then
    update TASK
       set STATE = 'executing', TS = sysdate
     where ID = p_id;
    l_result := 'OK';
  else
    l_result := 'Do not run';
  end if;
  commit;  -- release the FOR UPDATE lock
  return l_result;
end;
This function locks the row for the length of its execution so no other process can edit it, which means only one process can run it at a time. Since the function runs in the database it will finish even if the Java process dies. I guess the only downside is that it is blocking, but only for the duration of this function, which is short. If you really can't have that, you can make the SELECT FOR UPDATE not wait by adding NOWAIT, which raises ORA-00054 if the row is already locked.
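A hypothetical call from a PL/SQL block (the ID 42 and the one-hour delay, written as 1/24 because sysdate arithmetic is in days, are illustrative):
declare
  l_run varchar2(20);
begin
  l_run := GetLockForRun(42, 1/24);
  if l_run = 'OK' then
    null;  -- run the task here, then set STATE back to 'completed'
  end if;
end;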