How can I lock a single row in Oracle SQL

It seems simple, but I struggle with it. The question is: how can I lock a single row from the table JOBS, for example the one with JOB_ID = 'IT_PROG'? I want to do this so I can test an exception in a procedure that displays a message when you try to update a locked row. Thanks in advance for your time.

You may lock the record as described in other answers, but you will not see any exception when UPDATEing that row.
The UPDATE statement will simply wait until the lock is released, i.e. until the session holding the SELECT ... FOR UPDATE lock commits; after that, the UPDATE is performed.
The only exception you can handle this way is a deadlock (ORA-00060), i.e.:
Session 1: SELECT ... FOR UPDATE record A
Session 2: SELECT ... FOR UPDATE record B
Session 1: UPDATE record B -- waits, as the record is locked
Session 2: UPDATE record A -- deadlock, as session 1 is waiting on session 2 and session 2 is waiting on session 1
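If you want to experiment with that case, ORA-00060 can be caught like any other exception. Here is a minimal sketch against the JOBS table from the question (the UPDATE itself is just illustrative):

DECLARE
    deadlock_detected EXCEPTION;
    PRAGMA EXCEPTION_INIT(deadlock_detected, -60);
BEGIN
    UPDATE jobs
       SET min_salary = min_salary + 1
     WHERE job_id = 'IT_PROG';
EXCEPTION
    WHEN deadlock_detected THEN
        -- Oracle has already rolled back the statement that hit
        -- ORA-00060; decide here whether to retry or give up.
        ROLLBACK;
END;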

AskTom has an example of what you're trying to do:
https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:4515126525609
From AskTom:
declare
    resource_busy exception;
    pragma exception_init( resource_busy, -54 );
    success boolean := false;
begin
    for i in 1 .. 3
    loop
        exit when (success);
        begin
            select xxx from yyy where .... for update NOWAIT;
            success := true;
        exception
            when resource_busy then
                dbms_lock.sleep(1);
        end;
    end loop;
    if ( not success ) then
        raise_application_error( -20001, 'row is locked by another session' );
    end if;
end;
This attempts to get a lock three times; if it can't get one (i.e. ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired is raised on each attempt), it raises an error.

It's not possible to manually lock a single row in Oracle; you can manually lock a whole object (with LOCK TABLE), though. An exclusive lock is placed on the row automatically when you perform DML on it, ensuring that no other session can update the same row, and that no DDL can drop it, until your transaction ends; other sessions can still read the row at any time.
The first session to request the lock on a row gets it, and any other session requesting write access must wait.
If you don't want to get blocked, that is, if you don't want to wait, you can use:
SELECT ... FOR UPDATE [ NOWAIT | WAIT n ] [ SKIP LOCKED ]
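For illustration, here is how those options behave against the JOBS row from the question, run from a second session while the row is already locked (the error numbers are standard Oracle behavior):

-- Fails immediately with ORA-00054 if the row is already locked:
SELECT * FROM jobs WHERE job_id = 'IT_PROG' FOR UPDATE NOWAIT;

-- Waits up to 5 seconds, then fails with ORA-30006 if still locked:
SELECT * FROM jobs WHERE job_id = 'IT_PROG' FOR UPDATE WAIT 5;

-- Returns only unlocked rows, silently skipping locked ones:
SELECT * FROM jobs WHERE job_id = 'IT_PROG' FOR UPDATE SKIP LOCKED;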

Related

Can updating same value from two sessions cause a deadlock in Oracle?

I have an application where a user clicks a button on the UI, which triggers an Oracle function. I want to avoid multiple parallel runs of that function in the DB (at any given time there should be only one ongoing run). Can I use the custom locking mechanism below to achieve this without worrying about deadlock?
My approach:
- The flag is initially set to NULL.
- If multiple sessions trigger the function at the same time, only one of them will continue, because only one of them will be able to update the flag.
- The function updates the flag back to NULL after processing is done.
DDL
create table test_oracle_lock (id int, flag varchar(1), primary key (id));
Custom Locking Code to avoid parallel runs
update test_oracle_lock set flag = 'In Use' where flag is null and id = 1;
updated_rows := sql%rowcount;
commit;

IF updated_rows = 0 THEN
    -- unable to update the flag (i.e. unable to acquire the custom lock), so leave
    RETURN;
ELSE
    -- execute all SQL statements to process data, then update the flag back to NULL
    update test_oracle_lock set flag = NULL where flag = 'In Use' and id = 1;
END IF;
A better solution is SELECT ... FOR UPDATE on the table. Do that at the beginning and you don't have to worry about manual locking: it locks only the row in question, so it won't interfere with other sessions, and if there is a rollback the lock is released automatically.
select id into l_id from my_table for update;
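For the single-run requirement above, a minimal sketch of that idea, reusing the question's TEST_ORACLE_LOCK table (the processing body is elided):

DECLARE
    row_locked EXCEPTION;
    PRAGMA EXCEPTION_INIT(row_locked, -54);
    l_id test_oracle_lock.id%TYPE;
BEGIN
    -- Only one session at a time can hold this row lock.
    SELECT id
      INTO l_id
      FROM test_oracle_lock
     WHERE id = 1
       FOR UPDATE NOWAIT;

    -- ... do the processing here ...

    COMMIT;  -- releases the lock
EXCEPTION
    WHEN row_locked THEN
        NULL;  -- another run is already in progress, so exit quietly
END;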

Why row is visible to several sessions when selected FOR UPDATE SKIP LOCKED?

Assume there are two tables TST_SAMPLE (10000 rows) and TST_SAMPLE_STATUS (empty).
I want to iterate over each record in TST_SAMPLE and add exactly one record to TST_SAMPLE_STATUS accordingly.
In a single thread that would be simply this:
begin
    for r in (select * from TST_SAMPLE)
    loop
        insert into TST_SAMPLE_STATUS(rec_id, rec_status)
        values (r.rec_id, 'TOUCHED');
    end loop;
    commit;
end;
/
In a multithreaded solution there's a situation which is not clear to me, so could you explain what causes one row of TST_SAMPLE to be processed several times?
Please see the details below.
create table TST_SAMPLE(
    rec_id number(10) primary key
);

create table TST_SAMPLE_STATUS(
    rec_id     number(10),
    rec_status varchar2(10),
    session_id varchar2(100)
);

begin
    insert into TST_SAMPLE(rec_id)
    select LEVEL from dual connect by LEVEL <= 10000;
    commit;
end;
/
CREATE OR REPLACE PROCEDURE tst_touch_recs(pi_limit int) is
    v_last_iter_count int;
begin
    loop
        v_last_iter_count := 0;
        --------------------------
        for r in (select *
                    from TST_SAMPLE A
                   where rownum < pi_limit
                     and NOT EXISTS (select null
                                       from TST_SAMPLE_STATUS B
                                      where B.rec_id = A.rec_id)
                  FOR UPDATE SKIP LOCKED)
        loop
            insert into TST_SAMPLE_STATUS(rec_id, rec_status, session_id)
            values (r.rec_id, 'TOUCHED', SYS_CONTEXT('USERENV', 'SID'));
            v_last_iter_count := v_last_iter_count + 1;
        end loop;
        commit;
        --------------------------
        exit when v_last_iter_count = 0;
    end loop;
end;
/
In the FOR loop I try to iterate over rows that:
- have no status yet (the NOT EXISTS clause)
- are not currently locked by another thread (FOR UPDATE SKIP LOCKED)
There's no requirement for the exact number of rows in an iteration; pi_limit is just the maximum size of one batch. The only thing needed is that each row of TST_SAMPLE is processed in exactly one session.
So let's run this procedure in 3 threads.
declare
    v_job_id number;
begin
    dbms_job.submit(v_job_id, 'begin tst_touch_recs(100); end;', sysdate);
    dbms_job.submit(v_job_id, 'begin tst_touch_recs(100); end;', sysdate);
    dbms_job.submit(v_job_id, 'begin tst_touch_recs(100); end;', sysdate);
    commit;
end;
Unexpectedly, we see that some rows were processed in several sessions:

select count(unique rec_id) AS unique_count,
       count(rec_id)        AS total_count
  from TST_SAMPLE_STATUS;

| unique_count | total_count |
------------------------------
|        10000 |       17397 |
------------------------------
-- run to see duplicates
select *
  from TST_SAMPLE_STATUS
 where REC_ID in (select REC_ID
                    from TST_SAMPLE_STATUS
                   group by REC_ID
                  having count(*) > 1)
 order by REC_ID;
Please help me find the mistakes in the implementation of the procedure tst_touch_recs.
Here's a little example that shows why you're reading rows twice.
Run the following code in two sessions, starting the second a few seconds after the first:
declare
    cursor c is
        select a.*
          from TST_SAMPLE A
         where rownum < 10
           and NOT EXISTS (select null
                             from TST_SAMPLE_STATUS B
                            where B.rec_id = A.rec_id)
        FOR UPDATE SKIP LOCKED;

    type rec is table of c%rowtype index by pls_integer;
    rws rec;
begin
    open c; -- data are read consistent to this time

    dbms_lock.sleep ( 10 );

    fetch c bulk collect into rws;

    for i in 1 .. rws.count loop
        dbms_output.put_line ( rws(i).rec_id );
    end loop;

    commit;
end;
/
You should see both sessions display the same rows.
Why?
Because Oracle Database has statement-level consistency, the result set for both is frozen when you open the cursor.
But when you have SKIP LOCKED, the FOR UPDATE locking only kicks in when you fetch the rows.
So session 1 starts and finds the first 9 rows not in TST_SAMPLE_STATUS. It then waits 10 seconds.
Provided you start session 2 within these 10 seconds, the cursor will look for the same nine rows.
At this point no rows are locked.
Now, here's where it gets interesting.
The sleep in the first session will finish. It'll then fetch the rows, locking them and skipping any that are already locked.
Very shortly after, it'll commit. Releasing the lock.
A few moments later, session 2 comes to read these rows. At this point the rows are not locked!
So there's nothing to skip.
How exactly you solve this depends on what you're trying to do.
Assuming you can't move to a set-based approach, you could make the transactions serializable by adding:
set transaction isolation level serializable;
before the cursor loop. This will then move to transaction-level consistency. Enabling the database to detect "something changed" when fetching rows.
But you'll need to catch ORA-08177: can't serialize access for this transaction errors within the outer loop, or any process that re-reads the same rows will drop out at this point.
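A rough sketch of that retry pattern, with the batch work from the procedure elided:

declare
    cant_serialize exception;
    pragma exception_init(cant_serialize, -8177);
begin
    loop
        begin
            set transaction isolation level serializable;
            -- ... fetch FOR UPDATE SKIP LOCKED and insert statuses here ...
            commit;
            exit;
        exception
            when cant_serialize then
                rollback;  -- another session changed the rows; retry the batch
        end;
    end loop;
end;
/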
Or, as commenters have suggested, use Advanced Queuing.

Insert in Cursor loop and Exception handling - Oracle

I have a table called DUD which is pretty much static (meaning that once the data is inserted, it never changes). I query data from DUD and populate a staging table CAR, from which Webmethods polls every day.
Usually it is 10 records per transaction, and there are two transactions per day.
I have written a Cursor to do this and I am happy with the logic.
The output will look like:
TRANSID  A    B   C    CNT
-------  ---  --  ---  ---
A123     JIM  NY  ACT  1
A123     BOB  CA  ACT  2
A123     PIN  GA  ACT  3
--------------------------
A124     MIK  CA  ACT  1
A124     JON  MA  ACT  2
A124     CON  MY  ACT  3
A124     JIB  CA  ACT  4
What really concerns me, and my questions:
If an insert in the loop fails, it should roll back all the inserts made in this transaction, and not end up with partially inserted or orphaned records for a transaction. I commit only after the loop has completed and no exception was raised.
When an exception happens, I also want to know which record failed to insert. I hope to catch this in my exception handler and call a function there that inserts this information into an error table for further investigation.
Auto-commit is disabled in the DB, but will Oracle treat all the inserts done in a loop as one transaction, or as independent transactions committed immediately?
Code
DECLARE
    TYPE message_info IS RECORD (
        message_code INTEGER,
        message      VARCHAR2(500));

    msg     message_info;
    tranid  NUMBER;
    i       PLS_INTEGER;
    p_error EXCEPTION;

    CURSOR c1 IS
        SELECT *
          FROM dud
         WHERE dud."DATE" = SYSDATE   -- DATE is a reserved word, so the column must be quoted
           AND dud.status = 'ACTIVE';
BEGIN
    IF /* CHECK SOME condition */ TRUE THEN
        BEGIN
            SELECT seq_transid.NEXTVAL INTO tranid FROM dual;
            -- The transaction id is unique per transaction:
            -- all 10 records will have the same transaction id.
            FOR b1 IN c1
            LOOP
                i := c1%ROWCOUNT;
                INSERT INTO car
                    (transid, a, b, c, cnt)
                VALUES
                    (tranid, b1.a, b1.b, b1.c, i);
            END LOOP;
            COMMIT;
        EXCEPTION
            WHEN OTHERS THEN
                ROLLBACK;
                msg.message := 'Unable to insert into CAR Table';
                RAISE p_error;
        END;
    END IF;
EXCEPTION
    WHEN p_error THEN
        error.post_msg(msg.message, SQLCODE, SQLERRM, USER);
END;
You can also use a FORALL statement in this situation.
You are using a cursor and inserting into the table row by row inside a loop;
instead, you can insert all the rows in one shot. This will improve the performance of your code, and it also guarantees that either all the rows are inserted or none of them are.
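As a rough sketch of that FORALL approach, assuming DUD has columns a, b and c as the question's INSERT implies:

DECLARE
    TYPE t_a IS TABLE OF dud.a%TYPE;
    TYPE t_b IS TABLE OF dud.b%TYPE;
    TYPE t_c IS TABLE OF dud.c%TYPE;
    l_a    t_a;
    l_b    t_b;
    l_c    t_c;
    tranid NUMBER;
BEGIN
    SELECT seq_transid.NEXTVAL INTO tranid FROM dual;

    SELECT a, b, c
      BULK COLLECT INTO l_a, l_b, l_c
      FROM dud
     WHERE status = 'ACTIVE';

    -- One bulk statement instead of a row-by-row loop: if any
    -- iteration fails, the whole FORALL statement is rolled back.
    FORALL i IN 1 .. l_a.COUNT
        INSERT INTO car (transid, a, b, c, cnt)
        VALUES (tranid, l_a(i), l_b(i), l_c(i), i);

    COMMIT;
END;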
Basically, in the situation you describe there shouldn't be a problem, since you commit only after the loop completes and roll back on error.
But it may be better to use an AUTONOMOUS_TRANSACTION for the function that logs the error. In general one should avoid autonomous transactions, but since you need an atomic operation (logging the failed record) it fits here: you can be sure that its commit will not commit the inserts made in the loop.
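For illustration, a minimal sketch of such a logger; the ERROR_LOG table and its columns are hypothetical:

CREATE OR REPLACE PROCEDURE log_error (p_msg IN VARCHAR2) IS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO error_log (logged_at, message)
    VALUES (SYSDATE, p_msg);
    COMMIT;  -- commits only this insert, never the caller's pending work
END;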

how to fetch, delete, commit from cursor

I am trying to delete a lot of rows from a table. I want to try the approach of putting rows I want to delete into a cursor and then keep doing fetch, delete, commit on each row of the cursor until it is empty.
In the below code we are fetching rows and putting them in a TYPE.
How can I modify the code below to remove the TYPE from the picture and simply do fetch, delete, commit on the cursor itself?
OPEN bulk_delete_dup;
LOOP
    FETCH bulk_delete_dup BULK COLLECT INTO arr_inc_del LIMIT c_rows;

    -- 1 .. COUNT rather than FIRST .. LAST, so an empty final
    -- fetch does not fail with null bounds
    FORALL i IN 1 .. arr_inc_del.COUNT
        DELETE FROM UIV_RESPONSE_INCOME
         WHERE ROWID = arr_inc_del(i);

    COMMIT;
    arr_inc_del.DELETE;
    EXIT WHEN bulk_delete_dup%NOTFOUND;
END LOOP;

arr_inc_del.DELETE;
CLOSE bulk_delete_dup;
Why do you want to commit in batches? That is only going to slow down your processing. Unless there are other sessions that are trying to modify the rows you are trying to delete, which seems problematic for other reasons, the most efficient approach would be simply to delete the data with a single DELETE, i.e.
DELETE FROM uiv_response_income uri
 WHERE EXISTS (
           SELECT 1
             FROM (<<bulk_delete_dup query>>) bdd
            WHERE bdd.rowid = uri.rowid
       );
Of course, there may well be a more optimal way of writing this depending on how the query behind your cursor is designed.
If you really want to eliminate the BULK COLLECT (which will slow the process down substantially), you could use the WHERE CURRENT OF syntax to do the DELETE:
SQL> create table foo
2 as
3 select level col1
4 from dual
5 connect by level < 10000;
Table created.
SQL> ed
Wrote file afiedt.buf
1 declare
2 cursor c1 is select * from foo for update;
3 l_rowtype c1%rowtype;
4 begin
5 open c1;
6 loop
7 fetch c1 into l_rowtype;
8 exit when c1%notfound;
9 delete from foo where current of c1;
10 end loop;
11* end;
SQL> /
PL/SQL procedure successfully completed.
Be aware, however, that since you have to lock the rows (with the FOR UPDATE clause), you cannot put a commit in the loop. Doing a commit would release the locks you requested with the FOR UPDATE, and you'll get an ORA-01002: fetch out of sequence error:
SQL> ed
Wrote file afiedt.buf
1 declare
2 cursor c1 is select * from foo for update;
3 l_rowtype c1%rowtype;
4 begin
5 open c1;
6 loop
7 fetch c1 into l_rowtype;
8 exit when c1%notfound;
9 delete from foo where current of c1;
10 commit;
11 end loop;
12* end;
SQL> /
declare
*
ERROR at line 1:
ORA-01002: fetch out of sequence
ORA-06512: at line 7
You may not get a runtime error if you remove the locking and avoid the WHERE CURRENT OF syntax, deleting the data based on the value(s) you fetched from the cursor. However, this still fetches across a commit, which is a poor practice and radically increases the odds that you will, at least intermittently, get an ORA-01555: snapshot too old error. It will also be painfully slow compared to the single SQL statement or the BULK COLLECT option.
SQL> ed
Wrote file afiedt.buf
1 declare
2 cursor c1 is select * from foo;
3 l_rowtype c1%rowtype;
4 begin
5 open c1;
6 loop
7 fetch c1 into l_rowtype;
8 exit when c1%notfound;
9 delete from foo where col1 = l_rowtype.col1;
10 commit;
11 end loop;
12* end;
SQL> /
PL/SQL procedure successfully completed.
Of course, you also have to ensure that your process is restartable in case you process some subset of rows and have some unknown number of interim commits before the process dies. If the DELETE is sufficient to cause the row to no longer be returned from your cursor, your process is probably already restartable. But in general, that's a concern if you try to break a single operation into multiple transactions.
A few things. It seems from your company's "no transaction over 8 seconds" rule (8 seconds, you in Texas?) that you have a production DB instance that traditionally supported apps doing OLTP work (insert 1 row, update 2 rows, etc.) and has now also become the batch-processing DB (remove 50% of the rows and replace them with 1mm new rows).
Batch processing should be separated from the OLTP instance. In a batch ("data factory") instance I wouldn't try deleting in this case; I'd probably take a CTAS, drop old table, rename new table, rebuild indexes/stats, recompile invalid objects approach.
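As a rough sketch of that CTAS-and-swap approach (table names and the filter are illustrative):

-- Keep only the rows that should survive, then swap the tables.
CREATE TABLE someTable_new AS
    SELECT *
      FROM someTable
     WHERE someCol <> 'BLAH';

DROP TABLE someTable;

ALTER TABLE someTable_new RENAME TO someTable;

-- Then recreate indexes, constraints, grants and triggers,
-- gather fresh statistics, and recompile invalidated objects.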
Assuming you are stuck doing batch processing in your "8 second" instance, you'll probably find your company asking for more and more of this in the future, so ask the DBAs for as much rollback (undo) as you can get, and hope you don't hit a snapshot too old by fetching across commits (a cursor SELECT driving the deletes, a commit every 1000 rows or so, deleting by rowid).
If the DBAs can't help, you may be able to first create a temp table containing the rowids you wish to delete, and then loop through that temp table to delete from the main table (avoiding fetches across commits on the table being changed; a sketch of this variant follows the code below), but your company will probably have some rule against this as well, as it is another (basic) batch technique.
Something like:
declare
    -- assuming an index on someCol
    cursor sel_cur is
        select rowid as row_id
          from someTable
         where someCol = 'BLAH';

    v_ctr pls_integer := 0;
begin
    for rec in sel_cur
    loop
        v_ctr := v_ctr + 1;

        -- watch out for snapshot too old...
        delete from someTable
         where rowid = rec.row_id;

        if (mod(v_ctr, 1000) = 0) then
            commit;
        end if;
    end loop;
    commit;
end;
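And the temp-table variant mentioned above might look like this sketch (the staging table name is illustrative):

-- Stage the rowids once, so the driving cursor reads a table the
-- deletes never touch (no fetching across commits on someTable).
CREATE TABLE rowids_to_delete AS
    SELECT rowid AS row_id
      FROM someTable
     WHERE someCol = 'BLAH';

DECLARE
    v_ctr PLS_INTEGER := 0;
BEGIN
    FOR rec IN (SELECT row_id FROM rowids_to_delete)
    LOOP
        DELETE FROM someTable
         WHERE rowid = rec.row_id;

        v_ctr := v_ctr + 1;
        IF MOD(v_ctr, 1000) = 0 THEN
            COMMIT;
        END IF;
    END LOOP;
    COMMIT;
END;
/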

Continuing Inserts in Oracle when exception is raised

I'm working on migration of data from a legacy system into our new app(running on Oracle Database, 10gR2). As part of the migration, I'm working on a script which inserts the data into tables that are used by the app.
The number of rows of data to import runs into the thousands, and the source data is not clean (unexpected nulls in NOT NULL columns, etc.). So while inserting data through the scripts, whenever such an exception occurs the script ends abruptly and the whole transaction is rolled back.
Is there a way, by which I can continue inserts of data for which the rows are clean?
Using NVL() or COALESCE() is not an option, as I'd like to log the rows causing the errors so that the data can be corrected for the next pass.
EDIT: My current procedure has an exception handler, I am logging the first row which causes the error. Would it be possible for inserts to continue without termination, because right now on the first handled exception, the procedure terminates execution.
If the data volumes were higher, row-by-row processing in PL/SQL would probably be too slow.
In those circumstances, you can use DML error logging, described in the Oracle documentation:
CREATE TABLE raises (
    emp_id NUMBER,
    sal    NUMBER CONSTRAINT check_sal CHECK (sal > 8000));

EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');

INSERT INTO raises
    SELECT employee_id, salary * 1.1 FROM employees
     WHERE commission_pct > .2
    LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;

SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;

ORA_ERR_MESG$                 ORA_ERR_TAG$          EMP_ID  SAL
----------------------------  --------------------  ------  -----
ORA-02290: check constraint   my_bad                   161   7700
(HR.SYS_C004266) violated
Using PL/SQL you can perform each insert in its own transaction (COMMIT after each) and log or ignore errors with an exception handler that keeps going.
Try this:
for r_row in c_legacy_data loop
    begin
        insert into some_table(a, b, c, ...)
        values (r_row.a, r_row.b, r_row.c, ...);
    exception
        when others then
            null; /* or some extra logging */
    end;
end loop;
DECLARE
    CURSOR legacy_cur IS
        SELECT * FROM legacy_table;  -- your source rows (name illustrative)
BEGIN
    FOR r IN legacy_cur LOOP
        BEGIN                                    -- sub-block begins
            SAVEPOINT start_transaction;         -- mark a savepoint
            -- do whatever you have to do here
            COMMIT;
        EXCEPTION
            WHEN OTHERS THEN
                ROLLBACK TO start_transaction;   -- undo this row's changes
        END;                                     -- sub-block ends
    END LOOP;
END;
If you use SQL*Loader (sqlldr) you can tell it to continue loading past bad rows; all the 'bad' data is skipped and logged in a separate file.
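For illustration, a minimal control file for that might look like the sketch below; the file names, table, and columns are hypothetical:

-- legacy.ctl (SQL*Loader control file; names are illustrative)
LOAD DATA
INFILE  'legacy.csv'
BADFILE 'legacy.bad'   -- rejected rows land here for the next pass
APPEND
INTO TABLE target_table
FIELDS TERMINATED BY ','
(col_a, col_b, col_c)

Invoked with something like sqlldr userid=usr/pwd control=legacy.ctl errors=100000, where the ERRORS parameter raises the default limit of 50 rejected rows before the load aborts.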