Performance improvement of PL/SQL using BULK COLLECT

I am using BULK COLLECT to improve execution time. Without BULK COLLECT the block executes in about 4 minutes.
But when I use BULK COLLECT there is no output, and no error message is shown in the console; only a blank spool file is created.
Please let me know whether I have used BULK COLLECT incorrectly, and also whether this clause can be used in a SELECT statement with LIMIT.
The table contains at most 1 million records.
SET SERVEROUTPUT ON FORMAT WRAPPED
SET VERIFY OFF
SET FEEDBACK OFF
SET TERMOUT OFF
SPOOL C:\Temp\spool_1.txt
DECLARE
  cursor c2 is (
    select count(distinct e.cdb_pref_event_id)
          ,e.supp_cd
      from (select distinct eh.cdb_customer_id   cdb_customer_id
                           ,eh.cdb_pref_event_id cdb_pref_event_id
                           ,eh.supp_cd           supp_cd
              from (select *
                      from cdb_stg.cpm_pref_event_stg_arc
                     where trunc(load_date) = trunc(sysdate - 1)) eh
              left outer join cdb_admin.cpm_pref_result er
                on (eh.cdb_customer_id = er.cdb_customer_id
                    and eh.cdb_pref_event_id = er.cdb_pref_event_id)
             where er.cdb_pref_event_id is null
               and er.cdb_customer_id is null) r
      join cdb_admin.cpm_pref_event_exception e
        on (r.cdb_customer_id = e.cdb_customer_id
            and r.cdb_pref_event_id = e.cdb_pref_event_id)
     group by e.supp_cd);
  TYPE totalprefresults is table of NUMBER(20);
  TYPE supcd_1 is table of cdb_admin.cpm_pref_event_stg.supp_cd%TYPE;
  total_prefresults totalprefresults;
  supcd1 supcd_1;
  --Total_prefresults NUMBER(20);
  --SUPCD1 CDB_ADMIN.CPM_PREF_EVENT_STG.supp_cd%TYPE;
  profile_counts NUMBER(20);
  iter Integer := 0;
BEGIN
  select count(distinct cdb_customer_id)
    into profile_counts
    from cdb_admin.cpm_pref_event_exception h
   where cdb_customer_id in
         (select distinct e.cdb_customer_id
            from (select distinct eh.cdb_customer_id   cdb_customer_id
                                 ,eh.cdb_pref_event_id cdb_pref_event_id
                                 ,eh.supp_cd           supp_cd
                    from (select *
                            from cdb_stg.cpm_pref_event_stg_arc
                           where trunc(load_date) = trunc(sysdate - 1)) eh
                    left outer join cdb_admin.cpm_pref_result er
                      on (eh.cdb_customer_id = er.cdb_customer_id
                          and eh.cdb_pref_event_id = er.cdb_pref_event_id)
                   where er.cdb_pref_event_id is null
                     and er.cdb_customer_id is null) r
            join cdb_admin.cpm_pref_event_exception e
              on (r.cdb_customer_id = e.cdb_customer_id
                  and r.cdb_pref_event_id = e.cdb_pref_event_id)
           where e.supp_cd = 'PROFILE-NOT-FOUND')
     and h.supp_cd != 'PROFILE-NOT-FOUND';
dbms_output.put_line('TOTAL EVENTS VALIDATION');
dbms_output.put_line('-------------------------------------------------------------');
dbms_output.put_line('');
dbms_output.put_line(rpad('Pref_Counts', 25) || rpad('Supp_CD', 25));
OPEN c2;
LOOP
FETCH c2 BULK COLLECT
INTO total_prefresults
,supcd1 limit 100;
EXIT WHEN c2%NOTFOUND;
dbms_output.put_line(rpad(total_prefresults, 25) || rpad(supcd1, 25));
IF (supcd1 = 'PROFILE-NOT-FOUND')
then
dbms_output.put_line('');
dbms_output.put_line('Profile not found records count : ' ||
total_prefresults);
dbms_output.put_line(profile_counts ||
' : counts moved to other exceptions ');
dbms_output.put_line((total_prefresults - profile_counts) ||
' : are still in Profile_not_found exception');
END IF;
iter := iter + 1;
END LOOP;
CLOSE c2;
dbms_output.put_line('');
dbms_output.put_line('Number of missing Records: ' || iter);
END;
/
SPOOL OFF

I think the bottleneck is this condition: where trunc(load_date) = trunc(sysdate - 1)
Do you have an index on trunc(load_date)? Either create a function-based index on trunc(load_date), or if you already have an index on load_date then try
WHERE load_date >= trunc(sysdate - 1) AND load_date < trunc(sysdate)
Also check whether the DISTINCT in your queries is really needed; remove it if possible.
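For example, the two options might look like this sketch (the index name is invented here; the table and column come from the question):

-- Option 1: function-based index matching the original predicate
create index idx_pref_evt_stg_arc_trunc_ld
    on cdb_stg.cpm_pref_event_stg_arc (trunc(load_date));

-- Option 2: with a plain index on load_date, filter with an index-friendly range
-- instead of applying TRUNC to the column
select *
  from cdb_stg.cpm_pref_event_stg_arc
 where load_date >= trunc(sysdate - 1)
   and load_date <  trunc(sysdate);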

I have reframed your code from OPEN c2; to CLOSE c2;
BULK COLLECT can be executed once (in one go) to store all the data in the collections, and the collections can then be accessed by index (i.e. I in the following case) in a FOR loop, as follows:
OPEN C2;
FETCH C2 BULK COLLECT INTO
TOTAL_PREFRESULTS,
SUPCD1;
--EXIT WHEN C2%NOTFOUND;
CLOSE C2;
-- To list down all the values before processing the logic
FOR I IN TOTAL_PREFRESULTS.FIRST..TOTAL_PREFRESULTS.LAST LOOP
DBMS_OUTPUT.PUT_LINE(RPAD(TOTAL_PREFRESULTS(I), 25)
|| RPAD(SUPCD1(I), 25));
END LOOP;
FOR I IN TOTAL_PREFRESULTS.FIRST..TOTAL_PREFRESULTS.LAST LOOP
IF ( SUPCD1(I) = 'PROFILE-NOT-FOUND' ) THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Profile not found records count : ' || TOTAL_PREFRESULTS(I));
DBMS_OUTPUT.PUT_LINE(PROFILE_COUNTS || ' : counts moved to other exceptions ');
DBMS_OUTPUT.PUT_LINE((TOTAL_PREFRESULTS(I) - PROFILE_COUNTS)
|| ' : are still in Profile_not_found exception');
END IF;
ITER := ITER + 1;
END LOOP;
Replace the above snippet in your code and try to execute it.
Refer to the guide on how to use BULK COLLECT.
Cheers!!

BULK COLLECT can provide a considerable performance gain. However, there are a couple of gotchas involved.
First off, there is a difference in the meaning of %NOTFOUND.
On a standard cursor, %NOTFOUND means all rows have already been fetched and there are no more. With BULK COLLECT this changes to "there were not enough rows to reach the LIMIT specified (if present)".
That does NOT mean no rows were fetched, just that the specified limit was not reached. For example, if your limit is 100 and the fetch retrieved only 50 rows, then %NOTFOUND returns TRUE. This is where the referenced guide fails.
The second is what happens without the LIMIT clause: all rows from the cursor are loaded into session memory (the PGA). So what's the problem with that?
If there are 100 rows or 1,000 then most likely you're OK, but if there are 100,000 or 1M rows, they are still all loaded into memory. Finally, when the LIMIT clause is used, the entire fetch-and-process must itself be enclosed within a loop, or you process only the first fetch (only the specified limit number of rows) no matter how many rows actually exist. Another point where the referenced guide fails.
The following skeleton accommodates the above.
declare
    max_bulk_rows constant integer := 1000;   -- max number of rows for each fetch
    cursor c_bulk is
        select ... ;
    type bulk_row_t is table of c_bulk%rowtype;
    bulk_row bulk_row_t;
begin
    open c_bulk;
    loop
        fetch c_bulk                          -- fill buffer
            bulk collect into bulk_row
            limit max_bulk_rows;
        for i in 1 .. bulk_row.count          -- process each row in buffer (safe even when the last fetch is empty)
        loop
            null;                             -- process individual row here
        end loop;
        -- forall ...                         -- bulk output of the processed rows here, if needed
        exit when bulk_row.count < max_bulk_rows;  -- exit once all rows have been processed
    end loop;                                 -- otherwise loop back and fetch the next buffer
    close c_bulk;
    ...
end;

Related

Cursor in Oracle - after passing parameters, select does not filter result rows

I am facing a strange (to me) issue with the following parameterized cursor.
I have defined the cursor in this way:
CURSOR cur_action ( product_code VARCHAR2(100) , action_master_list VARCHAR2(100))
IS
SELECT
act.ACTION_DETAIL_KEY,
act.ACTION_MASTER_KEY,
act.PRODUCT_CODE,
act.REF_ACTION_DETAIL_KEY
FROM XMLTABLE(action_master_list) x
JOIN ETDW.MFE_AR_ACTION_DETAILS act ON TO_NUMBER(x.COLUMN_VALUE) = act.ACTION_MASTER_KEY
WHERE 1=1
AND act.LAST_FLAG = 'Y'
AND act.PRODUCT_CODE = product_code;
Then I am using it in the following way:
OPEN cur_action ( iFromProductCode , iActionMasterKeyList);
LOOP
FETCH cur_action BULK COLLECT INTO vActionDetailKey, vActionMasterKey, vProductCode, vRefActionDetailKey LIMIT 100;
FOR j IN 1..cur_action%ROWCOUNT
LOOP
dbms_output.put_line('vActionDetailKey: ' || vActionDetailKey (j) ||' vActionMasterKey: '|| vActionMasterKey (j) || ' vProductCode: ' || vProductCode (j));
END LOOP;
END LOOP;
The result seems to be unfiltered. It doesn't return the 3 rows I expect (which is what the cursor's query returns when I run it outside the procedure/PL block); instead it returns all rows for the actions in the list. So it seems that the WHERE condition "act.PRODUCT_CODE = product_code" was not applied. Why?
Thank you
Why? Because you named the parameter the same as the column, so Oracle resolves product_code in the WHERE clause to the column itself and reads the condition as if it were where 1 = 1, i.e. no filtering at all.
Rename the parameters, e.g.
CURSOR cur_action ( par_product_code VARCHAR2(100) , ...
                    ----------------
                    this
and, later,
AND act.PRODUCT_CODE = par_product_code;
                       ----------------
                       this
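Put together with the question's cursor, the renamed declaration might look like this sketch (only the parameter names change; the query body is copied from the question, and the par_ prefix is just the example from above):

CURSOR cur_action ( par_product_code VARCHAR2(100) , par_action_master_list VARCHAR2(100))
IS
SELECT
    act.ACTION_DETAIL_KEY,
    act.ACTION_MASTER_KEY,
    act.PRODUCT_CODE,
    act.REF_ACTION_DETAIL_KEY
FROM XMLTABLE(par_action_master_list) x
JOIN ETDW.MFE_AR_ACTION_DETAILS act ON TO_NUMBER(x.COLUMN_VALUE) = act.ACTION_MASTER_KEY
WHERE act.LAST_FLAG = 'Y'
  AND act.PRODUCT_CODE = par_product_code;   -- now unambiguously the parameter, not the column

With distinct names, act.PRODUCT_CODE = par_product_code can no longer be read as a column compared with itself.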

How to use two columns with the DEFINESCORE clause in Oracle Text

I have this code:
declare
sName varchar(25);
iRank number := 0;
sDesc varchar(510);
cursor q is
SELECT *
FROM trec_topics ORDER BY num;
BEGIN
for ql in q
loop
sDesc := replace(replace(replace(ql.title, '?', '{?}'), ')', '{)}'), '(', '{(}');
--dbms_output.put_line(ql.num||'-'||sDesc);
declare
cursor c is
SELECT /*+ FIRST_ROWS(100) */ docno,
CASE
WHEN SCORE(10) >= SCORE(20) THEN SCORE(10)
ELSE SCORE(20)
END AS SCORE
FROM txt_search_docs WHERE CONTAINS(txt, 'DEFINESCORE(ql.title, OCCURRENCE)', 10) > 0 OR
CONTAINS(txt, 'DEFINESCORE(sDesc, OCCURRENCE)', 20) > 0
order by SCORE desc;
begin
iRank := 1;
for c1 in c
loop
dbms_output.put_line(ql.num||' Q0 '||c1.docno||' '||lpad(iRank,3, '0')||' '||lpad(c1.score, 2, '0')||' myUser');
iRank := iRank + 1;
exit when c%rowcount = 100;
end loop;
end;
end loop;
end;
As you can see, I'm selecting from two different tables; however, I need to change the standard score, as it did not perform well. I'm trying to use the DEFINESCORE operator, which has the format 'DEFINESCORE(query_term, scoring_expression)'.
How can I reference my table columns within this operator? That is, I need to pass my column values instead of "query_term", as there are several documents to search. The way I'm calling it now, it searches for the literal term ql.title.
Does anyone have a suggestion to help me with this problem?
I finally managed to solve it. The steps were:
declare a variable: topics varchar(525);
store the column value in it: topics := replace(replace(replace(ql.title, '?', '{?}'), ')', '{)}'), '(', '{(}');
and then concatenate it into the CONTAINS clause: FROM txt_search_docs WHERE CONTAINS(txt, 'DEFINESCORE(('''||topics||'''), OCCURRENCE)', 1) > 0
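Put together inside the question's loop, the fix might look like this sketch (the escaping and CONTAINS text come from the answer; using a single CONTAINS with score label 1 is a simplification, and the second score comparison from the question is omitted):

declare
    iRank  number := 0;
    topics varchar2(525);
begin
    for ql in (select * from trec_topics order by num) loop
        -- escape Oracle Text special characters in the column value
        topics := replace(replace(replace(ql.title, '?', '{?}'), ')', '{)}'), '(', '{(}');
        declare
            -- topics is concatenated into the query text, so DEFINESCORE
            -- receives the column value rather than the literal string 'ql.title'
            cursor c is
                select /*+ FIRST_ROWS(100) */ docno, score(1) as score
                  from txt_search_docs
                 where contains(txt, 'DEFINESCORE((''' || topics || '''), OCCURRENCE)', 1) > 0
                 order by score(1) desc;
        begin
            iRank := 1;
            for c1 in c loop
                dbms_output.put_line(ql.num||' Q0 '||c1.docno||' '||lpad(iRank, 3, '0')||' '||lpad(c1.score, 2, '0')||' myUser');
                iRank := iRank + 1;
                exit when c%rowcount = 100;
            end loop;
        end;
    end loop;
end;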

Database PL/SQL

I have to make an assignment for school but I get two errors:
Encountered the symbol "FETCH" when expecting on of the following:
constant exception <an identifier> <a double-quoted
delimited-identifier> table LONG_ double ref char time timestamp
interval date binary national character nchar
and
Encountered the symbol "end-of-file" when expecting one of the
following: end not pragma final instantiable order overriding static
member constructor map
Here is the link to my code: http://pastebin.com/h4JN9YQY
CREATE OR REPLACE PROCEDURE generate_bonus
AS
cursor student_info is
select distinct students.id,
events.begindatetime,
events.enddatetime,
count(items.number_of_coupons) as coupons_collected,
events.type from students
join applies on applies.students_id = students.id
join schedules on schedules.id = applies.schedules_id
join events on events.id = schedules.events_id
join orders on orders.students_id = students.id
join orderitems on orderitems.orders_id = orders.id
join items on items.id = orderitems.items_id
join bars on bars.id = orders.bars_id
where applies.status = 'PLANNED'
and orderitems."NUMBER" is not null
and bars.name is not null
group by students.id, events.begindatetime, events.enddatetime, events.type
order by students.id;
BEGIN
DECLARE
s_id integer(256);
s_beginDate date;
s_endDate date;
s_noCoupons number(256);
s_eventType varchar2(256);
s_workedHours number(24) := 8;
calculated_bonus number(256);
count_rows integer(256);
OPEN student_info;
LOOP
FETCH student_info into s_id, s_beginDate, s_endDate, s_noCoupons, s_eventType;
Select count(*) into count_rows from student_bonus where students_id = s_id and rownum <= 1;
EXIT WHEN count_rows = 1;
IF (s_eventType = 'ROUGH') THEN
calculated_bonus := s_workedHours * (s_workedHours / 100 * 7) * s_noCoupons;
INSERT INTO student_bonus(students_id, bonus, events_id) VALUES (s_id, calculated_bonus, s_eventType);
calculated_bonus := 0;
ELSIF (s_eventType = 'NORMAL') THEN
calculated_bonus := s_workedHours * (s_workedHours / 100 * 4) * s_noCoupons;
INSERT INTO student_bonus(students_id, bonus, events_id) VALUES (s_id, calculated_bonus, s_eventType);
calculated_bonus := 0;
ELSE
calculated_bonus := s_workedHours * (s_workedHours / 100 * 2) * s_noCoupons;
INSERT INTO student_bonus(students_id, bonus, events_id) VALUES (s_id, calculated_bonus, s_eventType);
calculated_bonus := 0;
END IF;
END LOOP;
CLOSE student_info;
END generate_bonus;
In my opinion, a cursor FOR loop is much easier for juniors to use; in this project it will help you avoid this kind of error. The syntax looks like:
FOR row_variable IN cursor LOOP
    dbms_output.put_line(row_variable.id);
END LOOP;
row_variable holds the values from each cursor row, and you can easily access them with the '.' (dot) operator, like row_variable.id.
Using a cursor FOR loop lets you avoid problems with fetching data, opening/closing the cursor, and referencing cursor values outside the cursor's scope.
The loop iterates exactly as many times as there are rows in the cursor, like a for-each loop (see the sketch below).
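For instance, the question's procedure could be restructured with a cursor FOR loop roughly like this (a sketch: the cursor query is the same one as in the question, abbreviated here, and the EXIT mirrors the original count_rows check):

CREATE OR REPLACE PROCEDURE generate_bonus
AS
    cursor student_info is
        select ... ;               -- same query as in the question
    s_workedHours    number := 8;
    calculated_bonus number;
    count_rows       pls_integer;
BEGIN
    FOR rec IN student_info LOOP
        select count(*) into count_rows
          from student_bonus
         where students_id = rec.id and rownum <= 1;
        EXIT WHEN count_rows = 1;  -- same condition as in the original code
        IF rec.type = 'ROUGH' THEN
            calculated_bonus := s_workedHours * (s_workedHours / 100 * 7) * rec.coupons_collected;
        ELSIF rec.type = 'NORMAL' THEN
            calculated_bonus := s_workedHours * (s_workedHours / 100 * 4) * rec.coupons_collected;
        ELSE
            calculated_bonus := s_workedHours * (s_workedHours / 100 * 2) * rec.coupons_collected;
        END IF;
        INSERT INTO student_bonus (students_id, bonus, events_id)
        VALUES (rec.id, calculated_bonus, rec.type);
    END LOOP;
END generate_bonus;

Note that there is no OPEN, FETCH, or CLOSE, and no nested DECLARE placement to worry about.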
put this line:
EXIT WHEN student_info%NOTFOUND;
after this line:
FETCH student_info into s_id, s_beginDate, s_endDate, s_noCoupons, s_eventType;
You reach the end of your cursor, but there is no code that tells the loop to exit.
A cursor should follow the steps in the following order:
OPEN cursor
LOOP
    FETCH cursor
    EXIT WHEN <exit condition>
    -- the rest of your code here
END LOOP;
CLOSE cursor
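Applied to the question's loop, that ordering is simply (using the question's own variable names):

OPEN student_info;
LOOP
    FETCH student_info INTO s_id, s_beginDate, s_endDate, s_noCoupons, s_eventType;
    EXIT WHEN student_info%NOTFOUND;   -- leave the loop once the cursor is exhausted
    -- the rest of the processing from the question goes here
END LOOP;
CLOSE student_info;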

How to dynamically store the value of a loop and return it once the for loop breaks?

I have code something like this:
FOR K IN (SELECT E.COLUMN_VALUE
            FROM TABLE (SELECT CAST(LEAVE_HOLIDAY_CAL_PKG_NEW.GES_LEV_COLUMN_TO_ROWS_FNC(V_CAL_SUBTSR, ',')
                                      AS LEV_TABLE_OF_VARCHAR_TYP)
                          FROM DUAL) E) LOOP
    V_CAL_DESC := CASE
                      WHEN K.COLUMN_VALUE = 'W' THEN 'Project Weekend-' || TO_CHAR((V_DATE1 + V_ITERATION), 'Day')
                      WHEN K.COLUMN_VALUE = 'H' THEN 'Holiday'
                      ELSE 'Working day'
                  END;
    IF K.COLUMN_VALUE = 'H' THEN
        FETCH C_HOLIDAY_CURSR INTO V_HOLIDAY_ID;
        SELECT H.HOLIDAY_DESC
          INTO V_CAL_DESC
          FROM LEAVE.LEV_NEW_EMP_HOLIDAY_DTLS H
         WHERE H.HOLIDAY_ID = V_HOLIDAY_ID;
    END IF;
    INSERT INTO LEV_CAL_TEMP_TBL
    VALUES (IN_PERSON_ID, K.COLUMN_VALUE, V_DATE1 + V_ITERATION, V_CAL_DESC);
    COMMIT;
    V_ITERATION := V_ITERATION + 1;
END LOOP;
OPEN C_CAL_CURSR FOR
    select * from LEV_CAL_TEMP_TBL;
So every time this FOR loop runs, one row is inserted into the table GES_LEV_CAL_TEMP_TBL.
I want to avoid this because it might affect the performance of the system. Is there any way I can store those values somewhere and return them in a cursor once the FOR loop ends?
Thanks a lot in advance.
A cursor FOR LOOP is row-by-row, a.k.a. slow-by-slow, processing.
If you must do it in PL/SQL, then use BULK COLLECT and the FORALL statement.
Declare a collection type:
For example,
TYPE t_lev_cal_temp_tbl IS TABLE OF lev_cal_temp_tbl%ROWTYPE;
l_lev_cal t_lev_cal_temp_tbl := t_lev_cal_temp_tbl();
FORALL insert statement:
For example,
FORALL i IN l_lev_cal.first .. l_lev_cal.last
INSERT INTO lev_cal_temp_tbl VALUES l_lev_cal(i);
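Put together, a minimal sketch might look like the following; the driving query is abbreviated, and the record field names used when filling the collection are assumptions, since the question only shows a positional INSERT:

DECLARE
    TYPE t_lev_cal_temp_tbl IS TABLE OF lev_cal_temp_tbl%ROWTYPE;
    l_lev_cal t_lev_cal_temp_tbl := t_lev_cal_temp_tbl();
BEGIN
    FOR k IN (SELECT ...) LOOP            -- the question's driving query
        l_lev_cal.EXTEND;
        -- fill the new record instead of inserting row by row
        -- (field names below are assumed; use the real columns of LEV_CAL_TEMP_TBL)
        l_lev_cal(l_lev_cal.LAST).person_id := in_person_id;
        l_lev_cal(l_lev_cal.LAST).cal_desc  := v_cal_desc;
        -- ...
    END LOOP;

    -- one set-based insert for everything collected above
    IF l_lev_cal.COUNT > 0 THEN
        FORALL i IN l_lev_cal.FIRST .. l_lev_cal.LAST
            INSERT INTO lev_cal_temp_tbl VALUES l_lev_cal(i);
    END IF;
    COMMIT;
END;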

ORA-38101: Invalid column in the INSERT VALUES Clause:

I'm in the middle of testing the accuracy of my normalized data by creating a tool to de-normalize the data for comparison. While doing this, I wanted to learn some new techniques for this tool beyond what I would normally do (cursors and looping through an insert/update), so I came across two things I wanted to try: bulk collections and the MERGE statement. My problem is that I'm having some trouble finding the best way to utilize the bulk collection.
EDIT:
Okay, so I found my problem/solution with the bulk collection: it was in fact the way I was fetching it. Instead of using the FORALL statement I changed it to a FOR loop. That led to the discovery of more bugs. The way I was trying to reference the values stored at the index was wrong, so I've rectified that. Now the only problem I seem to be having is the error noted in the title. In my MERGE, for some reason, the first value I try to use in the insert throws the following error:
PL/SQL: ORA-38101: Invalid column in the INSERT VALUES Clause: "TMI"."MACHINE_INTERVAL_ID"
ORA-06550: line 92, column 7:
So what I would like to know now is why exactly I'm getting this error. I understand that my insert value is invalid, but I do not fully understand why that is.
This is the merge statement in question:
MERGE INTO TEST_MACHINE_INTERVAL TMI
USING (SELECT * FROM TEST_MACHINE_INTERVAL) OTMI
ON (TMI.MACHINE_INTERVAL_ID = OTMI.MACHINE_INTERVAL_ID)
WHEN MATCHED THEN
UPDATE SET TMI.MACHINE_INTERVAL_ID = OTMI.MACHINE_INTERVAL_ID, TMI.START_DATE_TIME = OTMI.START_DATE_TIME,
TMI.INTERVAL_DURATION = OTMI.INTERVAL_DURATION, TMI.CALC_END_TIME = OTMI.CALC_END_TIME,
TMI.MACHINE_NAME = OTMI.MACHINE_NAME, TMI.SITE_NAME = OTMI.SITE_NAME,
TMI.OPERATOR_INSTANCE = OTMI.OPERATOR_INSTANCE, TMI.OPERATOR_INSTANCE2 = OTMI.OPERATOR_INSTANCE2,
TMI.OPERATOR_INSTANCE3 = OTMI.OPERATOR_INSTANCE3, TMI.SHIFT_NAME = OTMI.SHIFT_NAME,
TMI.INTERVAL_CATEGORY = OTMI.INTERVAL_CATEGORY, TMI.NTP_CATEGORY_NAME = OTMI.NTP_CATEGORY_NAME,
TMI.MACHINE_MODE = OTMI.MACHINE_MODE, TMI.JOB_LOAD_STATE_NAME = OTMI.JOB_LOAD_STATE_NAME,
TMI.RAW_SOURCE_MSG_TYPE = OTMI.RAW_SOURCE_MSG_TYPE
WHEN NOT MATCHED THEN
INSERT (TMI.MACHINE_INTERVAL_ID, TMI.START_DATE_TIME, TMI.INTERVAL_DURATION, TMI.CALC_END_TIME,
TMI.MACHINE_NAME, TMI.SITE_NAME, TMI.OPERATOR_INSTANCE, TMI.OPERATOR_INSTANCE2,
TMI.OPERATOR_INSTANCE3, TMI.SHIFT_NAME, TMI.INTERVAL_CATEGORY, TMI.NTP_CATEGORY_NAME,
TMI.MACHINE_MODE, TMI.JOB_LOAD_STATE_NAME, TMI.RAW_SOURCE_MSG_TYPE )
VALUES (MACHINE_INTERVAL_ID, START_DATE_TIME, INTERVAL_DURATION, CALC_END_TIME,
MACHINE_NAME, SITE_NAME, OPERATOR_INSTANCE, OPERATOR_INSTANCE2, OPERATOR_INSTANCE3,
SHIFT_NAME, INTERVAL_CATEGORY, NTP_CATEGORY_NAME, MACHINE_MODE,
JOB_LOAD_STATE_NAME, RAW_SOURCE_MSG_TYPE);
Below is the full version of my newly modified code:
-- Denormaliztion of machine_interval Table.
-- Is used to take all intervals from interval_table and convert it from
-- foreign keys to corresponding names.
DECLARE
START_DATE_TIME TIMESTAMP(6) WITH TIME ZONE;
CALC_END_TIME TIMESTAMP(6) WITH TIME ZONE;
MACHINE_NAME VARCHAR2(256);
SITE_NAME VARCHAR2(256);
OPERATOR_INSTANCE VARCHAR2(256);
OPERATOR_INSTANCE2 VARCHAR2(256);
OPERATOR_INSTANCE3 VARCHAR2(256);
SHIFT_NAME VARCHAR2(256);
INTERVAL_CATEGORY VARCHAR2(256);
NPT_CATEGORY_NAME VARCHAR2(256);
MACHINE_MODE VARCHAR2(256);
JOB_LOAD_STATE_NAME VARCHAR2(256);
RAW_SOURCE_MSG_TYPE VARCHAR2(256);
INTERVAL_DURATION NUMBER;
MACHINE_INTERVAL_ID NUMBER;
--step one: Get all the intervals and store them into a cursor
CURSOR INTERVAL_CUR IS
SELECT *
FROM MACHINE_INTERVAL
ORDER BY START_DATE_TIME ASC;
TYPE TOTAL_MACHINE_INTERVALS IS
TABLE OF interval_cur%rowtype
INDEX BY PLS_INTEGER;
MACHINE_INTERVAL_ROW TOTAL_MACHINE_INTERVALS;
BEGIN
--step two: Make sure Test_Machine_interval is empty.
DELETE FROM TEST_MACHINE_INTERVAL;
OPEN INTERVAL_CUR;
LOOP
FETCH INTERVAL_CUR BULK COLLECT INTO MACHINE_INTERVAL_ROW LIMIT 100;
--step three: Loop through all the intervals.
FOR INDX IN 1..MACHINE_INTERVAL_ROW.COUNT
LOOP
--step four: Gather all datavalues needed to populate test_machine_interval.
MACHINE_INTERVAL_ID := MACHINE_INTERVAL_ROW(indx).MACHINE_INTERVAL_ID;
START_DATE_TIME := MACHINE_INTERVAL_ROW(indx).START_DATE_TIME;
CALC_END_TIME := MACHINE_INTERVAL_ROW(indx).CALC_END_TIME;
INTERVAL_DURATION := MACHINE_INTERVAL_ROW(indx).INTERVAL_DURATION;
INTERVAL_CATEGORY := MACHINE_INTERVAL_ROW(indx).INTERVAL_CATEGORY;
RAW_SOURCE_MSG_TYPE := MACHINE_INTERVAL_ROW(indx).RAW_SOURCE_MSG_TYPE;
SELECT M.MACHINE_NAME INTO MACHINE_NAME
FROM MACHINE M
WHERE MACHINE_ID = MACHINE_INTERVAL_ROW(indx).MACHINE_ID;
SELECT S.SITE_NAME INTO SITE_NAME
FROM SITE S
LEFT OUTER JOIN MACHINE M ON M.SITE_ID = S.SITE_ID
WHERE M.MACHINE_ID = MACHINE_INTERVAL_ROW(indx).MACHINE_ID;
SELECT O.OPERATOR_NAME INTO OPERATOR_INSTANCE
FROM OPERATOR_INSTANCE OI
LEFT OUTER JOIN OPERATOR O ON OI.OPERATOR_ID = O.OPERATOR_ID
WHERE OI.OPERATOR_INSTANCE_ID = MACHINE_INTERVAL_ROW(indx).OPERATOR_INSTANCE_ID;
SELECT O.OPERATOR_NAME INTO OPERATOR_INSTANCE2
FROM OPERATOR_INSTANCE OI
LEFT OUTER JOIN OPERATOR O ON OI.OPERATOR_ID = O.OPERATOR_ID
WHERE OI.OPERATOR_INSTANCE_ID = MACHINE_INTERVAL_ROW(indx).OPERATOR_INSTANCE_ID_2;
SELECT O.OPERATOR_NAME INTO OPERATOR_INSTANCE3
FROM OPERATOR_INSTANCE OI
LEFT OUTER JOIN OPERATOR O ON OI.OPERATOR_ID = O.OPERATOR_ID
WHERE OI.OPERATOR_INSTANCE_ID = MACHINE_INTERVAL_ROW(indx).OPERATOR_INSTANCE_ID_3;
SELECT NPT_CATEGORY_NAME INTO NPT_CATEGORY_NAME
FROM NPT_CATEGORY
WHERE NPT_CATEGORY_ID = MACHINE_INTERVAL_ROW(indx).NPT_CATEGORY_ID;
SELECT S.SHIFT_NAME INTO SHIFT_NAME
FROM SHIFTS S
LEFT OUTER JOIN SHIFT_TBL STBL ON S.SHIFT_ID = STBL.SHIFT_NAME_FK
WHERE STBL.SHIFT_ID_PK = MACHINE_INTERVAL_ROW(indx).SHIFT_ID;
SELECT MACHINE_MODE_NAME INTO MACHINE_MODE
FROM MACHINE_MODE MM
WHERE MM.MACHINE_MODE_ID = MACHINE_INTERVAL_ROW(indx).MACHINE_MODE_ID;
SELECT JLS.JOB_LOAD_STATE_NAME INTO JOB_LOAD_STATE_NAME
FROM JOB_LOAD_STATE JLS
WHERE JLS.JOB_LOAD_STATE_ID = MACHINE_INTERVAL_ROW(indx).JOB_LOAD_STATE_ID;
--step five: merge record into test_machine_interval.
MERGE INTO TEST_MACHINE_INTERVAL TMI
USING (SELECT * FROM TEST_MACHINE_INTERVAL) OTMI
ON (TMI.MACHINE_INTERVAL_ID = OTMI.MACHINE_INTERVAL_ID)
WHEN MATCHED THEN
UPDATE SET TMI.MACHINE_INTERVAL_ID = OTMI.MACHINE_INTERVAL_ID, TMI.START_DATE_TIME = OTMI.START_DATE_TIME,
TMI.INTERVAL_DURATION = OTMI.INTERVAL_DURATION, TMI.CALC_END_TIME = OTMI.CALC_END_TIME,
TMI.MACHINE_NAME = OTMI.MACHINE_NAME, TMI.SITE_NAME = OTMI.SITE_NAME,
TMI.OPERATOR_INSTANCE = OTMI.OPERATOR_INSTANCE, TMI.OPERATOR_INSTANCE2 = OTMI.OPERATOR_INSTANCE2,
TMI.OPERATOR_INSTANCE3 = OTMI.OPERATOR_INSTANCE3, TMI.SHIFT_NAME = OTMI.SHIFT_NAME,
TMI.INTERVAL_CATEGORY = OTMI.INTERVAL_CATEGORY, TMI.NTP_CATEGORY_NAME = OTMI.NTP_CATEGORY_NAME,
TMI.MACHINE_MODE = OTMI.MACHINE_MODE, TMI.JOB_LOAD_STATE_NAME = OTMI.JOB_LOAD_STATE_NAME,
TMI.RAW_SOURCE_MSG_TYPE = OTMI.RAW_SOURCE_MSG_TYPE
WHEN NOT MATCHED THEN
INSERT (TMI.MACHINE_INTERVAL_ID, TMI.START_DATE_TIME, TMI.INTERVAL_DURATION, TMI.CALC_END_TIME,
TMI.MACHINE_NAME, TMI.SITE_NAME, TMI.OPERATOR_INSTANCE, TMI.OPERATOR_INSTANCE2,
TMI.OPERATOR_INSTANCE3, TMI.SHIFT_NAME, TMI.INTERVAL_CATEGORY, TMI.NTP_CATEGORY_NAME,
TMI.MACHINE_MODE, TMI.JOB_LOAD_STATE_NAME, TMI.RAW_SOURCE_MSG_TYPE )
VALUES (MACHINE_INTERVAL_ID, START_DATE_TIME, INTERVAL_DURATION, CALC_END_TIME,
MACHINE_NAME, SITE_NAME, OPERATOR_INSTANCE, OPERATOR_INSTANCE2, OPERATOR_INSTANCE3,
SHIFT_NAME, INTERVAL_CATEGORY, NTP_CATEGORY_NAME, MACHINE_MODE,
JOB_LOAD_STATE_NAME, RAW_SOURCE_MSG_TYPE);
/*
EXECUTE IMMEDIATE 'INSERT INTO TEST_MACHINE_INTERVAL
(MACHINE_INTERVAL_ID, START_DATE_TIME, INTERVAL_DURATION, CALC_END_TIME,
MACHINE_NAME, SITE_NAME, OPERATOR_INSTANCE, OPERATOR_INSTANCE2,
OPERATOR_INSTANCE3, SHIFT_NAME, INTERVAL_CATEGORY, NTP_CATEGORY_NAME,
MACHINE_MODE,JOB_LOAD_STATE_NAME,RAW_SOURCE_MSG_TYPE )
VALUES(:1, :2, :3, :4, :5, :6, :7, :8, :9, :10, :11, :12, :13, :14, :15)'
USING MACHINE_INTERVAL_ID, START_DATE_TIME, INTERVAL_DURATION,
CALC_END_TIME, MACHINE_NAME, SITE_NAME,
OPERATOR_INSTANCE, OPERATOR_INSTANCE2, OPERATOR_INSTANCE3,
SHIFT_NAME, INTERVAL_CATEGORY, NTP_CATEGORY_NAME,
MACHINE_MODE,JOB_LOAD_STATE_NAME,RAW_SOURCE_MSG_TYPE;
*/
END LOOP;
EXIT WHEN MACHINE_INTERVAL_ROW.COUNT = 0;
END LOOP;
END;
I'm 75% sure that my problem lies in how I'm trying to fetch the bulk collection, as displayed in the code above. So my question is: how exactly should I be fetching the values from a bulk collection to use when merging the data?
Any suggestions or comments are greatly appreciated. Thank you.
If you use FORALL, what needs to follow is a single SQL statement that you will pass the entire collection to. If you simply want to iterate over the elements in the collection, you'd use a FOR loop.
The syntax for referring to the n-th element of a collection is collection_name(index).column_name.
So, if you want to iterate over the elements in the collection one by one, you'd want something like
FOR indx IN 1 .. MACHINE_INTERVAL_ROW.COUNT
LOOP
MACHINE_INTERVAL_ID := machine_interval_row(indx).MACHINE_INTERVAL_ID;
START_DATE_TIME := machine_interval_row(indx).START_DATE_TIME;
<<more code>>
END LOOP;
If you're going to refactor your code, though, I'm not sure what benefit you get from having a local variable MACHINE_INTERVAL_ID rather than just using machine_interval_row(indx).MACHINE_INTERVAL_ID. I'm also not sure why you're executing half a dozen separate SELECT statements, each of which returns a single row, rather than writing one SELECT statement that joins all these tables together and populates whatever local variables you want (see the sketch below).
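For example, several of those lookups could collapse into a single SELECT per interval row, something like this sketch (table and column names are taken from the question; the outer joins are an assumption, since the original separate SELECT INTO statements would raise NO_DATA_FOUND instead of returning NULL when a lookup row is missing):

SELECT m.machine_name,
       s.site_name,
       mm.machine_mode_name,
       jls.job_load_state_name
  INTO machine_name,
       site_name,
       machine_mode,
       job_load_state_name
  FROM machine m
  LEFT OUTER JOIN site s
    ON s.site_id = m.site_id
  LEFT OUTER JOIN machine_mode mm
    ON mm.machine_mode_id = machine_interval_row(indx).machine_mode_id
  LEFT OUTER JOIN job_load_state jls
    ON jls.job_load_state_id = machine_interval_row(indx).job_load_state_id
 WHERE m.machine_id = machine_interval_row(indx).machine_id;
-- the operator, NPT category and shift lookups could be folded in the same way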
Your MERGE is also going to be problematic: it doesn't make sense for both the source and the destination of a MERGE to be the same table, and I would expect Oracle to complain that it couldn't generate a stable set of rows if it tried to execute that statement. You could change the source of your MERGE to be a query against DUAL that selects all the local variables you've populated, i.e.
MERGE INTO TEST_MACHINE_INTERVAL TMI
USING (SELECT machine_interval_row(indx).MACHINE_INTERVAL_ID AS machine_interval_id,
              machine_interval_row(indx).START_DATE_TIME     AS start_date_time
         FROM dual) OTMI
ON (TMI.MACHINE_INTERVAL_ID = OTMI.MACHINE_INTERVAL_ID)
If TEST_MACHINE_INTERVAL is going to start off empty, though, it sounds like you'd be better off not using a MERGE, not using a BULK COLLECT and just writing an INSERT ... SELECT that pulled all the data you want to pull. Something like
INSERT INTO test_machine_interval( machine_interval_id,
start_date_time,
<<more columns>> )
SELECT machine_interval_id,
last_value(start_date_time) over (partition by machine_interval_id
order by start_date_time asc
rows between unbounded preceding
and unbounded following ) last_start_date_time,
<<more columns>>
FROM machine_interval