I am trying to delete many thousands of records using the BULK COLLECT concept. The table's record uniqueness is defined by 4 columns, viz. NUMBER_ID, JOB_VALUE, JOB_TYPE, JOB_DATE. I need to delete records based on these four columns. I wrote the following code. However, it errors out saying 1. "'V_JOB_VAL' is inappropriate as the left hand side of an assignment statement" 2. "cursor attribute may not be applied to non-cursor 'L_JOB_RID'". Could you please suggest where I am going wrong?
** Edited Code **
ALTER TABLE JOB_SAMPLE NOLOGGING;
ALTER SESSION ENABLE PARALLEL DML;
DECLARE
TYPE V_JOB_RID IS TABLE OF JOB_SAMPLE.JOB_ROW_ID%TYPE;
TYPE V_JOB_DT IS TABLE OF JOB_SAMPLE.JOB_DATE%TYPE;
TYPE V_JOB_TY IS TABLE OF JOB_SAMPLE.JOB_TYPE%TYPE;
TYPE V_JOB_VAL IS TABLE OF JOB_SAMPLE.JOB_VALUE%TYPE;
L_JOB_RID V_JOB_RID := V_JOB_RID();
L_JOB_DT V_JOB_DT := V_JOB_DT();
L_JOB_TY V_JOB_TY := V_JOB_TY();
L_JOB_VAL V_JOB_VAL := V_JOB_VAL();
L_DELETE_BUFFER PLS_INTEGER := 50000;
CURSOR CDELETE IS
SELECT /*+ PARALLEL(10) */
T3.JOB_ROW_ID,
T3.JOB_DATE,
T3.JOB_TYPE,
T3.JOB_VALUE
FROM JOB_T1 T1,
JOB_T2 T2,
JOB_SAMPLE T3
WHERE T1.ROW_ID = T2.JOB_T1_ID
AND T3.JOB_ROW_ID = T1.JOB_ID
AND T2.T2_VAL = T3.JOB_VALUE
AND T1.NAME = T3.JOB_TYPE
AND TO_DATE(T2.T2_DT,'DD-MON-YY') = TO_DATE(T3.JOB_DATE,'DD-MON-YY');
BEGIN
OPEN CDELETE;
LOOP
FETCH CDELETE BULK COLLECT INTO L_JOB_RID, L_JOB_DT, L_JOB_TY, L_JOB_VAL LIMIT L_DELETE_BUFFER;
EXIT WHEN L_JOB_RID.COUNT < L_DELETE_BUFFER;
FORALL I IN 1..L_JOB_RID.COUNT
DELETE /*+ PARALLEL(10) */ JOB_SAMPLE
WHERE JOB_ROW_ID = L_JOB_RID(I)
AND JOB_DATE = L_JOB_DT(I)
AND JOB_TYPE = L_JOB_TY(I)
AND JOB_VALUE = L_JOB_VAL(I);
COMMIT;
END LOOP;
CLOSE CDELETE;
COMMIT;
END;
ALTER SESSION DISABLE PARALLEL DML;
Please let me know.
Thank you !
Start by changing
FETCH CDELETE BULK COLLECT INTO L_JOB_RID, L_JOB_DT, L_JOB_TY, V_JOB_VAL
into
FETCH CDELETE BULK COLLECT INTO L_JOB_RID, L_JOB_DT, L_JOB_TY, L_JOB_VAL
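Also check the exit test: EXIT WHEN L_JOB_RID.COUNT < L_DELETE_BUFFER leaves the loop before the final, partial batch is deleted. A minimal sketch of the corrected fetch/delete loop, reusing the cursor and collections you already declared:
OPEN CDELETE;
LOOP
  FETCH CDELETE BULK COLLECT INTO L_JOB_RID, L_JOB_DT, L_JOB_TY, L_JOB_VAL LIMIT L_DELETE_BUFFER;
  EXIT WHEN L_JOB_RID.COUNT = 0;  -- nothing left to process
  FORALL I IN 1..L_JOB_RID.COUNT
    DELETE /*+ PARALLEL(10) */ JOB_SAMPLE
     WHERE JOB_ROW_ID = L_JOB_RID(I)
       AND JOB_DATE   = L_JOB_DT(I)
       AND JOB_TYPE   = L_JOB_TY(I)
       AND JOB_VALUE  = L_JOB_VAL(I);
  COMMIT;  -- commit each batch
END LOOP;
CLOSE CDELETE;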
Related
I am trying to do a bulk update in Oracle. The operation involves updating millions of records in a target table based on column values from another table. The scenario goes like this.
I have 2 tables:
T1 (source)
t1_col1 t1_col2 t1_col3 t1_col4 t1_col5
T2 (target)
t2_col1 t2_col2 t2_col3 t2_col4 t2_col5 t2_col6
I need to do an update like this:
update t1
set t2_col1 = t1_col1,
t2_col2 = t1_col2,
t2_col3 = sysdate
where t2_col4 = t1_col4
and t2_col5 = t1_col5
and t2_col6 = null
How can I achieve the above update for multiple columns comprising millions of records? Going through the forum, I understand this should be done using BULK COLLECT, a cursor, FORALL, LIMIT, etc. I am unable to come up with a query based on this, though. Appreciate your help.
The update of millions of rows is always slow, no matter whether you do it with a plain UPDATE or with BULK COLLECT. The fastest method is CREATE TABLE ... AS SELECT.
Example (HR schema):
Create a target table, EMP, for example as a copy of the EMPLOYEES table:
create table emp as select * from employees;
update emp set salary = salary * 2;
Create a third, temporary table with the new values for your update; here I change the SALARY column:
create table emp_new as SELECT
emp.employee_id,
emp.first_name,
emp.last_name,
emp.email,
emp.phone_number,
emp.hire_date,
emp.job_id,
e.salary,
emp.commission_pct,
emp.manager_id,
emp.department_id
FROM employees e, emp
WHERE emp.employee_id = e.employee_id;
Drop the original target table:
create table emp_arch as select * from emp; -- optional: archive the data first
drop table emp;
Rename the third table to the target table name:
rename emp_new to emp;
That's all; it should be the fastest way. Remember that you need to add constraints and indexes to the new table.
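For example, rebuilding the dependent objects on the renamed table might look like this (the index and constraint names below are just made up for illustration):
-- re-create whatever indexes, constraints and grants existed on the old table
create index emp_department_ix on emp (department_id);
alter table emp add constraint emp_id_pk primary key (employee_id);
grant select on emp to some_app_user; -- hypothetical grantee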
I once did a comparison of how to load data using different methods, so if you are interested you can check it out. At the end there are scripts.
https://how2ora-en.blogspot.com/2020/03/loading-data-sql-vs-forall-perf-tests.html
If you want to use bulk collect it could look like this:
declare
type tabId is table of employees.employee_id%type;
type tabSal is table of employees.salary%type;
tIds tabId;
tSals tabSal;
bulk_errors exception;
pragma exception_init(bulk_errors, -24381);
nErrCnt number;
n_errcode number;
v_msg varchar2(4000);
n_idx number;
cursor cur is select employee_id, salary from employees;
begin
open cur;
loop
fetch cur bulk collect into tIds, tSals limit 1000; -- fetch in batches
exit when tIds.count = 0;
BEGIN
forall i in 1..tIds.count save exceptions -- SAVE EXCEPTIONS so ORA-24381 is raised for failed rows
update emp set salary = tSals(i)
where employee_id = tIds(i);
exception when bulk_errors then
nErrCnt:= sql%bulk_exceptions.count;
for i in 1 .. nErrCnt
loop
n_errcode := sql%bulk_exceptions(i).error_code;
v_msg := sqlerrm(-n_errcode);
n_idx := sql%bulk_exceptions(i).error_index;
dbms_output.put_line('row ' || n_idx || ' failed: ' || v_msg);
end loop;
end;
end loop;
close cur;
end;
Assume there are two tables TST_SAMPLE (10000 rows) and TST_SAMPLE_STATUS (empty).
I want to iterate over each record in TST_SAMPLE and add exactly one record to TST_SAMPLE_STATUS accordingly.
In a single thread that would be simply this:
begin
for r in (select * from TST_SAMPLE)
loop
insert into TST_SAMPLE_STATUS(rec_id, rec_status)
values (r.rec_id, 'TOUCHED');
end loop;
commit;
end;
/
In a multithreaded solution there's a situation which is not clear to me.
So could you explain what causes one row of TST_SAMPLE to be processed several times?
Please see the details below.
create table TST_SAMPLE(
rec_id number(10) primary key
);
create table TST_SAMPLE_STATUS(
rec_id number(10),
rec_status varchar2(10),
session_id varchar2(100)
);
begin
insert into TST_SAMPLE(rec_id)
select LEVEL from dual connect by LEVEL <= 10000;
commit;
end;
/
CREATE OR REPLACE PROCEDURE tst_touch_recs(pi_limit int) is
v_last_iter_count int;
begin
loop
v_last_iter_count := 0;
--------------------------
for r in (select *
from TST_SAMPLE A
where rownum < pi_limit
and NOT EXISTS (select null
from TST_SAMPLE_STATUS B
where B.rec_id = A.rec_id)
FOR UPDATE SKIP LOCKED)
loop
insert into TST_SAMPLE_STATUS(rec_id, rec_status, session_id)
values (r.rec_id, 'TOUCHED', SYS_CONTEXT('USERENV', 'SID'));
v_last_iter_count := v_last_iter_count + 1;
end loop;
commit;
--------------------------
exit when v_last_iter_count = 0;
end loop;
end;
/
In the FOR-LOOP I try to iterate over rows that:
- have no status yet (the NOT EXISTS clause)
- are not currently locked by another thread (FOR UPDATE SKIP LOCKED)
There's no requirement for the exact number of rows in an iteration.
Here pi_limit is just the maximum size of one batch. The only thing needed is to process each row of TST_SAMPLE in exactly one session.
So let's run this procedure in 3 threads.
declare
v_job_id number;
begin
dbms_job.submit(v_job_id, 'begin tst_touch_recs(100); end;', sysdate);
dbms_job.submit(v_job_id, 'begin tst_touch_recs(100); end;', sysdate);
dbms_job.submit(v_job_id, 'begin tst_touch_recs(100); end;', sysdate);
commit;
end;
Unexpectedly, we see that some rows were processed in several sessions:
select count(unique rec_id) AS unique_count,
count(rec_id) AS total_count
from TST_SAMPLE_STATUS;
| unique_count | total_count |
------------------------------
| 10000 | 17397 |
------------------------------
-- run to see duplicates
select *
from TST_SAMPLE_STATUS
where REC_ID in (
select REC_ID
from TST_SAMPLE_STATUS
group by REC_ID
having count(*) > 1
)
order by REC_ID;
Please help me find the mistakes in the implementation of the procedure tst_touch_recs.
Here's a little example that shows why you're reading rows twice.
Run the following code in two sessions, starting the second a few seconds after the first:
declare
cursor c is
select a.*
from TST_SAMPLE A
where rownum < 10
and NOT EXISTS (select null
from TST_SAMPLE_STATUS B
where B.rec_id = A.rec_id)
FOR UPDATE SKIP LOCKED;
type rec is table of c%rowtype index by pls_integer;
rws rec;
begin
open c; -- data are read consistent to this time
dbms_lock.sleep ( 10 );
fetch c
bulk collect
into rws;
for i in 1 .. rws.count loop
dbms_output.put_line ( rws(i).rec_id );
end loop;
commit;
end;
/
You should see both sessions display the same rows.
Why?
Because Oracle Database has statement-level consistency, the result set for both is frozen when you open the cursor.
But when you have SKIP LOCKED, the FOR UPDATE locking only kicks in when you fetch the rows.
So session 1 starts and finds the first 9 rows not in TST_SAMPLE_STATUS. It then waits 10 seconds.
Provided you start session 2 within these 10 seconds, the cursor will look for the same nine rows.
At this point no rows are locked.
Now, here's where it gets interesting.
The sleep in the first session will finish. It'll then fetch the rows, locking them and skipping any that are already locked.
Very shortly after, it'll commit, releasing the locks.
A few moments later, session 2 comes to read these rows. At this point the rows are not locked!
So there's nothing to skip.
How exactly you solve this depends on what you're trying to do.
Assuming you can't move to a set-based approach, you could make the transactions serializable by adding:
set transaction isolation level serializable;
before the cursor loop. This moves you to transaction-level consistency, enabling the database to detect that "something changed" when fetching rows.
But you'll need to catch "ORA-08177: can't serialize access for this transaction" errors within the outer loop. Otherwise any process that re-reads the same rows will drop out at this point.
Or, as commenters have suggested, use Advanced Queuing.
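For the serializable route, here's a rough sketch of tst_touch_recs (same tables and procedure name as in the question; the ORA-08177 handler simply rolls back the batch and lets the loop retry):
CREATE OR REPLACE PROCEDURE tst_touch_recs(pi_limit int) is
  v_last_iter_count int;
  e_serialize exception;
  pragma exception_init(e_serialize, -8177);
begin
  loop
    v_last_iter_count := 0;
    begin
      set transaction isolation level serializable;
      for r in (select *
                  from TST_SAMPLE A
                 where rownum < pi_limit
                   and NOT EXISTS (select null
                                     from TST_SAMPLE_STATUS B
                                    where B.rec_id = A.rec_id)
                   FOR UPDATE SKIP LOCKED)
      loop
        insert into TST_SAMPLE_STATUS(rec_id, rec_status, session_id)
        values (r.rec_id, 'TOUCHED', SYS_CONTEXT('USERENV', 'SID'));
        v_last_iter_count := v_last_iter_count + 1;
      end loop;
      commit;
    exception
      when e_serialize then
        rollback;               -- another session touched these rows; try again
        v_last_iter_count := 1; -- force one more pass
    end;
    exit when v_last_iter_count = 0;
  end loop;
end;
/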
I have about 94,000 records that need to be deleted, but I have been told not to delete them all at once because it will slow performance due to the delete trigger. What would be the best solution to accomplish this? I was thinking of an additional loop after the commit of 1000, but I'm not sure how to implement it, or whether that will reduce performance even more.
DECLARE
CURSOR CLEAN IS
SELECT EMP_ID, ACCT_ID FROM RECORDS_TO_DELETE F; -- Table containing the records that need to be deleted.
COUNTER INTEGER := 0;
BEGIN
FOR F IN CLEAN LOOP
COUNTER := COUNTER + 1;
DELETE FROM EMPLOYEES
WHERE EMP_ID = F.EMP_ID AND ACCT_ID = F.ACCT_ID;
IF MOD(COUNTER, 1000) = 0 THEN
COMMIT;
END IF;
END LOOP;
COMMIT;
END;
You should read a bit about BULK COLLECT in Oracle. It is commonly considered the proper way of working with large tables.
Example:
DECLARE
CURSOR c_delete IS
SELECT ROWID FROM ps_al_chk_memo; -- restrict with your own WHERE clause
TYPE t_rowid_tab IS TABLE OF ROWID;
t_delete t_rowid_tab;
l_delete_buffer PLS_INTEGER := 10000;
BEGIN
OPEN c_delete;
LOOP
FETCH c_delete BULK COLLECT INTO t_delete LIMIT l_delete_buffer;
FORALL i IN 1..t_delete.COUNT
DELETE ps_al_chk_memo
WHERE ROWID = t_delete (i);
COMMIT;
EXIT WHEN c_delete%NOTFOUND;
END LOOP;
CLOSE c_delete;
END;
You can do it in a single statement; this should be the fastest way of all:
DELETE FROM EMPLOYEES
WHERE (EMP_ID, ACCT_ID) = ANY (SELECT EMP_ID, ACCT_ID FROM RECORDS_TO_DELETE);
Since the volume of records is not that large, you can still go with plain SQL rather than PL/SQL. Whenever possible, try SQL first. I don't think it will cause that much of a performance impact.
DELETE FROM EMPLOYEES E
WHERE EXISTS
(SELECT 1 FROM RECORDS_TO_DELETE F
WHERE E.EMP_ID = F.EMP_ID
AND E.ACCT_ID = F.ACCT_ID);
Hope this helps.
I want to fetch around 6 millions rows from one table and insert them all into another table.
How do I do it using BULK COLLECT and FORALL ?
declare
-- define array type of the new table
TYPE new_table_array_type IS TABLE OF NEW_TABLE%ROWTYPE INDEX BY BINARY_INTEGER;
-- define array object of new table
new_table_array_object new_table_array_type;
-- fetch size for the bulk operation; scale the value to balance
-- performance against IO and memory usage
fetch_size NUMBER := 5000;
-- define the select statement over the old table
-- select the desired columns of OLD_TABLE to be filled into NEW_TABLE
CURSOR old_table_cursor IS
select * from OLD_TABLE;
BEGIN
OPEN old_table_cursor;
loop
-- bulk fetch(read) operation
FETCH old_table_cursor BULK COLLECT
INTO new_table_array_object LIMIT fetch_size;
EXIT WHEN new_table_array_object.COUNT = 0; -- exit only after the last (possibly partial) batch has been processed
-- do your business logic here (if any)
-- FOR i IN 1 .. new_table_array_object.COUNT LOOP
-- new_table_array_object(i).some_column := 'HELLO PLSQL';
-- END LOOP;
-- bulk Insert operation
FORALL i IN INDICES OF new_table_array_object SAVE EXCEPTIONS
INSERT INTO NEW_TABLE VALUES new_table_array_object(i);
COMMIT;
END LOOP;
CLOSE old_table_cursor;
End;
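Since the FORALL above uses SAVE EXCEPTIONS, any failed rows surface as ORA-24381 after the statement completes. A sketch of a handler you could wrap around the FORALL step inside the loop (same variable names as above):
BEGIN
  FORALL i IN INDICES OF new_table_array_object SAVE EXCEPTIONS
    INSERT INTO NEW_TABLE VALUES new_table_array_object(i);
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -24381 THEN
      -- report each failed row, then continue with the next batch
      FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE('row ' || SQL%BULK_EXCEPTIONS(j).ERROR_INDEX ||
                             ' failed: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(j).ERROR_CODE));
      END LOOP;
    ELSE
      RAISE;
    END IF;
END;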
Hope this helps.
Below is an example:
CREATE OR REPLACE PROCEDURE fast_way IS
TYPE PartNum IS TABLE OF parent.part_num%TYPE
INDEX BY BINARY_INTEGER;
pnum_t PartNum;
TYPE PartName IS TABLE OF parent.part_name%TYPE
INDEX BY BINARY_INTEGER;
pnam_t PartName;
BEGIN
SELECT part_num, part_name
BULK COLLECT INTO pnum_t, pnam_t
FROM parent;
FOR i IN pnum_t.FIRST .. pnum_t.LAST
LOOP
pnum_t(i) := pnum_t(i) * 10;
END LOOP;
FORALL i IN pnum_t.FIRST .. pnum_t.LAST
INSERT INTO child
(part_num, part_name)
VALUES
(pnum_t(i), pnam_t(i));
COMMIT;
END;
The SQL engine parses and executes the SQL statements and, in some cases, returns data to the PL/SQL engine.
During execution of a PL/SQL block, every SQL statement causes a context switch between the two engines. When the PL/SQL engine encounters a SQL statement, it stops and passes control to the SQL engine. The SQL engine executes the statement and returns the data back to the PL/SQL engine. This transfer of control is called a context switch. Each individual switch is very fast, but context switches performed a large number of times hurt performance.
With BULK COLLECT, the SQL engine retrieves all the rows, loads them into the collection, and switches back to the PL/SQL engine once, so multiple rows are fetched with a single context switch.
Example 1:
DECLARE
Type stcode_Tab IS TABLE OF demo_bulk_collect.storycode%TYPE;
Type category_Tab IS TABLE OF demo_bulk_collect.category%TYPE;
s_code stcode_Tab;
cat_tab category_Tab;
Start_Time NUMBER;
End_Time NUMBER;
CURSOR c1 IS
select storycode,category from DEMO_BULK_COLLECT;
BEGIN
Start_Time:= DBMS_UTILITY.GET_TIME;
FOR rec in c1
LOOP
NULL;
--insert into bulk_collect_a values(rec.storycode,rec.category);
END LOOP;
End_Time:= DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Time for standard fetch: ' || (End_Time-Start_Time) || ' hsec (1/100 sec)');
Start_Time:= DBMS_UTILITY.GET_TIME;
Open c1;
FETCH c1 BULK COLLECT INTO s_code,cat_tab;
Close c1;
FOR x in s_code.FIRST..s_code.LAST
LOOP
null;
END LOOP;
End_Time:= DBMS_UTILITY.GET_TIME;
DBMS_OUTPUT.PUT_LINE('Using BULK COLLECT, fetch time: ' || (End_Time-Start_Time) || ' hsec (1/100 sec)');
END;
CREATE OR REPLACE PROCEDURE APPS.XXPPL_xxhil_wrmtd_bulk
AS
CURSOR cur_postship_line
IS
SELECT ab.process, ab.machine, ab.batch_no, ab.sales_ord_no, ab.spec_no,
ab.fg_item_desc brand_job_name, ab.OPERATOR, ab.rundate, ab.shift,
ab.in_qty1 input_kg, ab.out_qty1 output_kg, ab.by_qty1 waste_kg,
-- null,
xxppl_reports_pkg.cf_waste_per (ab.org_id,ab.process,ab.out_qty1,ab.by_qty1,ab.batch_no,ab.shift,ab.rundate) waste_percentage,
ab.reason_desc reasons, ab.cause_desc cause,
to_char(to_date(ab.rundate),'MON')month,
to_char(to_date(ab.rundate),'yyyy')year,
xxppl_org_name (ab.org_id) plant
FROM (SELECT a.org_id,
(SELECT so_line_no
FROM xx_ppl_logbook_h x
WHERE x.batch_no = a.batch_no
AND x.org_id = a.org_id) so_line_no,
(SELECT spec_no
FROM xx_ppl_logbook_h x
WHERE x.batch_no = a.batch_no
AND x.org_id = a.org_id
and x.SALES_ORD_NO=a.sales_ord_no
and x.OPRN_ID =a.OPRN_ID
and x.RESOURCES = a.RESOURCES) SPEC_NO,
(SELECT OPERATOR
FROM xx_ppl_logbook_f y, xx_ppl_logbook_h z
WHERE y.org_id = a.org_id
AND y.batch_no = a.batch_no
AND y.rundate = a.rundate
AND a.process = y.process
AND a.resources = y.resources
AND a.oprn_id = y.oprn_id
AND z.org_id = y.org_id
AND z.batch_no = y.batch_no
AND z.batch_id = z.batch_id
AND z.rundate = y.rundate
AND z.process = y.process) OPERATOR,
a.batch_no, a.oprn_id, a.rundate, a.fg_item_desc,
a.resources, a.machine, a.sales_ord_no, a.process,
a.process_desc, a.tech_desc, a.sub_invt, a.by_qty1,
a.by_qty2, a.um1, a.um2, a.shift,
DECODE (shift, 'I', 1, 'II', 2, 'III', 3) shift_rank,
a.reason_desc, a.in_qty1, a.out_qty1, a.cause_desc
FROM xxppl_byproduct_waste_v a
WHERE 1 = 1
--AND a.org_id = (CASE WHEN :p_orgid IS NULL THEN a.org_id ELSE :p_orgid END)
AND a.rundate BETWEEN to_date('01-'||to_char(add_months(TRUNC(TO_DATE(sysdate,'DD-MON-RRRR')) +1, -2),'MON-RRRR'),'DD-MON-RRRR') AND SYSDATE
---AND a.process = (CASE WHEN :p_process IS NULL THEN a.process ELSE :p_process END)
) ab;
TYPE postship_list IS TABLE OF xxhil_wrmtd_tab%ROWTYPE;
postship_trns postship_list;
v_err_count NUMBER;
BEGIN
delete from xxhil_wrmtd_tab;
OPEN cur_postship_line;
FETCH cur_postship_line
BULK COLLECT INTO postship_trns;
FORALL i IN postship_trns.FIRST .. postship_trns.LAST SAVE EXCEPTIONS
INSERT INTO xxhil_wrmtd_tab
VALUES postship_trns (i);
CLOSE cur_postship_line;
COMMIT;
END;
/
---- INSERTING DATA INTO A TABLE WITH THE HELP OF A CURSOR
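One fragile spot in the procedure above: postship_trns.FIRST .. postship_trns.LAST raises an error when the cursor returns no rows, because FIRST and LAST are NULL for an empty collection. A common guard is to check the count first, for example:
IF postship_trns.COUNT > 0 THEN
  FORALL i IN 1 .. postship_trns.COUNT SAVE EXCEPTIONS
    INSERT INTO xxhil_wrmtd_tab
    VALUES postship_trns (i);
END IF;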
Edit: Please answer one of the two questions I ask below. I know there are other options that would be better in a different case. These other potential options (partitioning the table, running it as one large delete statement without committing in batches, etc.) are NOT options in my case due to things outside my control.
I have several very large tables to delete from. All have the same foreign key that is indexed. I need to delete certain records from all tables.
table source
id --primary_key
import_source --used for choosing the ids to delete
table t1
id --foreign key
--other fields
table t2
id --foreign key
--different other fields
Usually when doing a delete like this, I'll put together a loop to step through all the ids:
declare
my_counter integer := 0;
begin
for cur in (
select id from source where import_source = 'bad.txt'
) loop
begin
delete from source where id = cur.id;
delete from t1 where id = cur.id;
delete from t2 where id = cur.id;
my_counter := my_counter + 1;
if my_counter > 500 then
my_counter := 0;
commit;
end if;
end;
end loop;
commit;
end;
However, in some code I saw elsewhere, it was put together in separate loops, one for each delete.
declare
type import_ids is table of integer index by pls_integer;
my_import_ids import_ids;
my_count integer := 0;
begin
select id bulk collect into my_import_ids from source where import_source = 'bad.txt';
for h in 1..my_import_ids.count loop
delete from t1 where id = my_import_ids(h);
--do commit check
end loop;
for h in 1..my_import_ids.count loop
delete from t2 where id = my_import_ids(h);
--do commit check
end loop;
end;
The --do commit check lines stand in for the same commit-every-500-rows logic as in the block above.
So I need one of the following answered:
1) Which of these is better?
2) How can I find out which is better for my particular case? (i.e., if it depends on how many tables I have, how big they are, etc.)
Edit:
I must do this in a loop due to the size of these tables. I will be deleting thousands of records from tables with hundreds of millions of records. This is happening on a system that can't afford to have the tables locked for that long.
EDIT:
NOTE: I am required to commit in batches. The amount of data is too large to do it in one batch. The rollback tables will crash our database.
If there is a way to commit in batches other than looping, I'd be willing to hear it. Otherwise, don't bother saying that I shouldn't use a loop...
Why loop at all?
delete from t1 where id IN (select id from source where import_source = 'bad.txt');
delete from t2 where id IN (select id from source where import_source = 'bad.txt');
delete from source where import_source = 'bad.txt';
That's using standard SQL. I don't know Oracle specifically, but many DBMSes also feature multi-table JOIN-based DELETEs that would let you do the whole thing in a single statement.
David,
If you insist on committing, you can use the following code:
declare
type import_ids is table of integer index by pls_integer;
my_import_ids import_ids;
cursor c is select id from source where import_source = 'bad.txt';
begin
open c;
loop
fetch c bulk collect into my_import_ids limit 500;
forall h in 1..my_import_ids.count
delete from t1 where id = my_import_ids(h);
forall h in 1..my_import_ids.count
delete from t2 where id = my_import_ids(h);
commit;
exit when c%notfound;
end loop;
close c;
end;
This program fetches ids in batches of 500 rows, deleting and committing each batch. It should be much faster than row-by-row processing, because BULK COLLECT and FORALL work as a single operation (in a single round-trip to and from the database), thus minimizing the number of context switches. See Bulk Binds, Forall, Bulk Collect for details.
First of all, you shouldn't commit in the loop - it is not efficient (generates lots of redo) and if some error occurs, you can't roll back.
As mentioned in previous answers, you should issue single DELETE statements, or, if you are deleting most of the records, it could be more optimal to create new tables with the remaining rows, drop the old ones and rename the new ones to the old names.
Something like this:
CREATE TABLE new_table as select * from old_table where <filter only remaining rows>;
index new_table
grant on new_table
add constraints on new_table
etc on new_table
drop table old_table
rename new_table to old_table;
See also Ask Tom
Larry Lustig is right that you don't need a loop. Nonetheless there may be some benefit in doing the delete in smaller chunks. Here PL/SQL bulk binds can improve speed greatly:
declare
type import_ids is table of integer index by pls_integer;
my_import_ids import_ids;
my_count integer := 0;
begin
select id bulk collect into my_import_ids from source where import_source = 'bad.txt';
forall h in 1..my_import_ids.count
delete from t1 where id = my_import_ids(h);
forall h in 1..my_import_ids.count
delete from t2 where id = my_import_ids(h);
end;
The way I wrote it, it does it all at once; in that case, yes, the single SQL statement is better. But you can change your loop conditions to break it into chunks. The key points are:
don't commit on every row. If anything, commit only every N rows.
When using chunks of N, don't run the delete in an ordinary loop. Use forall to run the delete as a bulk bind, which is much faster.
The reason, aside from the overhead of commits, is that each time you execute an SQL statement inside PL/SQL code it essentially does a context switch. Bulk binds avoid that.
You may try partitioning anyway to use parallel execution, not just to drop one partition. The Oracle documentation may prove useful in setting this up. Each partition would use its own rollback segment in this case.
If you are doing the delete from the source before the t1/t2 deletes, that suggests you don't have referential integrity constraints (as otherwise you'd get errors saying child records exist).
I'd go for creating the constraint with ON DELETE CASCADE. Then a simple
DECLARE
v_cnt NUMBER := 1;
BEGIN
WHILE v_cnt > 0 LOOP
DELETE FROM source WHERE import_source = 'bad.txt' and rownum < 5000;
v_cnt := SQL%ROWCOUNT;
COMMIT;
END LOOP;
END;
The child records would get deleted automatically.
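For reference, adding such cascading constraints might look like this (the constraint names are invented):
-- assumes source.id is the primary key, as in the question
ALTER TABLE t1 ADD CONSTRAINT t1_source_fk
  FOREIGN KEY (id) REFERENCES source (id) ON DELETE CASCADE;
ALTER TABLE t2 ADD CONSTRAINT t2_source_fk
  FOREIGN KEY (id) REFERENCES source (id) ON DELETE CASCADE;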
If you can't have the ON DELETE CASCADE, I'd go with a GLOBAL TEMPORARY TABLE with ON COMMIT DELETE ROWS
DECLARE
v_cnt NUMBER := 1;
BEGIN
WHILE v_cnt > 0 LOOP
INSERT INTO temp (id)
SELECT id FROM source WHERE import_source = 'bad.txt' and rownum < 5000;
v_cnt := SQL%ROWCOUNT;
DELETE FROM t1 WHERE id IN (SELECT id FROM temp);
DELETE FROM t2 WHERE id IN (SELECT id FROM temp);
DELETE FROM source WHERE id IN (SELECT id FROM temp);
COMMIT;
END LOOP;
END;
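The temp table used above would be created once up front; a sketch, assuming the id type from the question:
-- one-time setup for the "temp" table referenced above
CREATE GLOBAL TEMPORARY TABLE temp (
  id NUMBER(10)
) ON COMMIT DELETE ROWS;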
I'd also go for the largest chunk your DBA will allow.
I'd expect each transaction to last for at least a minute. More frequent commits would be a waste.
"This is happening on a system that can't afford to have the tables locked for that long."
Oracle doesn't lock tables, only rows. I'm assuming no-one will be locking the rows you are deleting (or at least not for long). So locking is not an issue.