I have an UPDATE query:
UPDATE tablename
SET column1 = 'value1', column2 = 'value2', column3 = 'value3'
WHERE column4 = 'value4';
I need the statement above modified so that it:
commits every 5,000 records, and
stops after a total of 500,000 rows have been updated.
Is this possible in Oracle 11g? How can we achieve it?
I wrote a PL/SQL block:
DECLARE
  fromCount NUMBER(10) := 0;
  toCount   NUMBER(10) := 0;
BEGIN
  LOOP
    toCount := fromCount + 5000;
    UPDATE tablename
       SET column1 = 'value1', column2 = 'value2', column3 = 'value3'
     WHERE column4 = 'value4' AND ROWNUM > fromCount AND ROWNUM < toCount;
    COMMIT;
    IF toCount = 500000 THEN
      EXIT;
    END IF;
  END LOOP;
END;
It is taking more than an hour to execute. How can I improve its performance?
You could add an 'and rownum <= 500000' condition. The problem is that you won't know which records have been updated and which have not.
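Alternatively, if rows that have already been updated no longer match the predicate (here that holds because column1 is being set to 'value1'), a batched loop can commit every 5,000 rows and stop once 500,000 have been changed. This is only a sketch under that assumption; the extra NULL check keeps rows whose column1 is currently NULL in scope:

DECLARE
  l_batch PLS_INTEGER;
  l_total PLS_INTEGER := 0;
BEGIN
  LOOP
    -- Each pass picks up at most 5,000 rows that have not been changed yet.
    UPDATE tablename
       SET column1 = 'value1', column2 = 'value2', column3 = 'value3'
     WHERE column4 = 'value4'
       AND (column1 <> 'value1' OR column1 IS NULL)
       AND ROWNUM <= 5000;

    l_batch := SQL%ROWCOUNT;        -- capture before COMMIT resets it
    COMMIT;

    EXIT WHEN l_batch = 0;          -- nothing left to update
    l_total := l_total + l_batch;
    EXIT WHEN l_total >= 500000;    -- stop after 500,000 rows in total
  END LOOP;
END;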
I have been trying to create a trigger function that assigns a value to a declared variable and then acts based on that value.
I use EXECUTE format(<SQL statement>) to assign the value to cnt.
CREATE OR REPLACE FUNCTION my_function() RETURNS TRIGGER AS $$
DECLARE
  cnt bigint;
BEGIN
  IF NEW.field1 = 'DECLINED' THEN
    cnt := EXECUTE format('SELECT count(*) FROM table1 WHERE field2 = $1 AND field1 != $2 AND id != $3;') USING NEW.field2, NEW.field1, NEW.id INTO cnt;
    IF cnt = 0 THEN
      EXECUTE format('UPDATE table1 SET field1 = %1$s WHERE id = $2') USING 'DECLINED', NEW.field2;
    END IF;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER my_trigger
  BEFORE UPDATE OF field1 ON table1
  FOR EACH ROW
  WHEN (NEW.field1 = 'DECLINED')
  EXECUTE FUNCTION my_function();
However, I am getting the following error:
ERROR: syntax error at or near "("
LINE 7: cnt := EXECUTE format('SELECT count(*) FROM...
Not sure if it is relevant, but id is a text column, field1 is an ENUM, and field2 is also a text column. Could that be a problem in the SELECT statement?
Any ideas what I could be missing?
I only want to fire the second statement if cnt equals 0.
It could be rewritten as a single statement:
UPDATE table1
SET field1 = ...
WHERE id = ...
AND NOT EXISTS (SELECT *
FROM table1
WHERE field2 = ...
AND field1 != ...
AND id != ...);
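Filled in with the column values from the question (inside the trigger function body, and keeping the UPDATE target id = NEW.field2 from the original code, which may need adjusting), that single statement might look like:

UPDATE table1
   SET field1 = 'DECLINED'
 WHERE id = NEW.field2
   AND NOT EXISTS (SELECT 1
                   FROM table1
                   WHERE field2 = NEW.field2
                     AND field1 != NEW.field1
                     AND id != NEW.id);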
Using it in a trigger suggests this is an attempt to implement partial uniqueness. If so, a partial/filtered index is also an option:
CREATE UNIQUE INDEX uq ON table1(id, field1) WHERE field2 = ....;
Although @Lukasz's solution may also work, I ended up using the implementation suggested by @stickybit in the comments on the question.
Answer:
CREATE OR REPLACE FUNCTION my_function() RETURNS TRIGGER AS $$
DECLARE
  cnt bigint;
BEGIN
  IF NEW.field1 = 'DECLINED' THEN
    -- Plain SQL inside plpgsql; no EXECUTE needed for a static statement
    SELECT count(*) INTO cnt
    FROM table1
    WHERE field2 = NEW.field2 AND field1 != NEW.field1 AND id != NEW.id;
    IF cnt = 0 THEN
      UPDATE table1 SET field1 = 'DECLINED' WHERE id = NEW.field2;
    END IF;
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
I'm trying to write an Oracle procedure. I have a table, and currently I'm using a MERGE statement: when a record has changed it updates it, and when it is new it adds it.
However, we want to keep track of changed records. So I'm adding three fields: startdate, enddate, currentflag. When a record changes I don't want to update it; I want to add a new record instead, while setting the enddate and changing the flag on the old record.
So, if I have a table like this:
TableID
Field1
Field2
Field3
StartDate
EndDate
CurrentFlag
And it has data like this
TableID  Field1   Field2  Field3  StartDate  EndDate   CurrentFlag
001      DataA    Cow     Brown   3-Oct-18   (null)    Y
001      DataA    Cow     White   1-Sep-18   3-Oct-18  N
002      DataB    Horse   Dapple  3-Oct-18   (null)    Y
I want to merge in some data
TableID Field1 Field2 Field3
001 NewData Cow Black
002 DataB Horse Dapple
005 Data3 Cat Black
So that the final table looks like this
TableID  Field1   Field2  Field3  StartDate  EndDate    CurrentFlag
001      DataA    Cow     Brown   3-Oct-18   10-Oct-18  N
001      DataA    Cow     White   1-Sep-18   3-Oct-18   N
001      NewData  Cow     Black   10-Oct-18  (null)     Y
002      DataB    Horse   Dapple  3-Oct-18   (null)     Y
005      Data3    Cat     Black   10-Oct-18  (null)     Y
My pseudocode is:
for each record in source file
    find current record in dest table (on ID and flag = Y)
    if any other fields do not match (Field1, Field2, Field3)
        then update current record, set enddate, current flag to N
        and add new record with startdate = sysdate, current flag = Y
    if no match found, then add new record with startdate = sysdate, current flag = Y
I'm not sure how to turn that pseudocode into Oracle SQL code. Can I use the same MERGE statement, but in the WHEN MATCHED add a check to see if any of the other fields are different?
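From what I can tell, MERGE does accept a condition on the WHEN MATCHED update, so something like the sketch below might handle closing out the changed current row and inserting brand-new keys (table and column names as above, with the source assumed to be called sourceTable); the replacement row for a changed key would still need a separate INSERT, since a single MERGE cannot both update and insert for the same matched source row:

MERGE INTO destTable d
USING sourceTable s
   ON (d.TableID = s.TableID)
 WHEN MATCHED THEN UPDATE
      SET d.EndDate = SYSDATE, d.CurrentFlag = 'N'
      -- CurrentFlag stays out of the ON clause because columns referenced there
      -- cannot be updated; nullable fields would need NULL-safe comparisons.
      WHERE d.CurrentFlag = 'Y'
        AND (d.Field1 <> s.Field1 OR d.Field2 <> s.Field2 OR d.Field3 <> s.Field3)
 WHEN NOT MATCHED THEN INSERT
      (TableID, Field1, Field2, Field3, StartDate, CurrentFlag)
      VALUES (s.TableID, s.Field1, s.Field2, s.Field3, SYSDATE, 'Y');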
I will be doing this for several tables, a few of which have a lot of records and many fields. So I need to figure out something that works and isn't as slow as molasses.
UPDATE
I have created a procedure as suggested, with some modifications, so it works:
CREATE OR REPLACE PROCEDURE TESTPROC AS
BEGIN
DECLARE
l_count NUMBER;
CURSOR TRN is
SELECT * from sourceTable;
BEGIN
FOR each_record IN TRN
LOOP
-- if a record found but fields differ ...
l_count := 0;
SELECT COUNT(*) INTO l_count
FROM destTable DIM
WHERE each_record.TableID = DIM.TableID
and (each_record.Field1 <> DIM.Field1
or each_record.Field2 <> DIM.Field2
or each_record.Field3 <> DIM.Field3)
AND DIM.CurrentFlag = 'Y';
-- ... then update existing current record, and add with new data
IF l_count > 0 THEN
UPDATE destTable DIM
SET EndDate = sysdate
,CurrentFlag = 'N'
WHERE each_record.TableID = DIM.TableID;
INSERT INTO destTable
(TableID
, Field1
, Field2
, Field3
, StartDate
, CurrentFlag)
VALUES (each_record.TableID
, each_record.Field1
, each_record.Field2
, each_record.Field3
, sysdate
, 'Y');
COMMIT;
END IF;
-- if no record found with this key...
l_count := 0;
SELECT COUNT(*) INTO l_count
FROM destTable DIM
WHERE each_record.TableID = DIM.TableID;
-- then add a new record
IF l_count = 0 THEN
INSERT INTO destTable
(TableID
, Field1
, Field2
, Field3
, StartDate
, CurrentFlag)
VALUES (each_record.TableID
, each_record.Field1
, each_record.Field2
, each_record.Field3
, sysdate
, 'Y');
END IF;
END LOOP;
COMMIT;
END;
END TESTPROC;
And on my small table, it worked nicely. Now I'm trying it on one of my larger tables (800k records, but by no means the largest), and I'm updating this question while it runs. It's been nearly an hour, and obviously that isn't acceptable. Once my program comes back, I'll add indexes on TableID and on (TableID, CurrentFlag). If the indexes don't help, any suggestions for the slow-as-molasses aspect?
You can write a simple procedure for this:
DECLARE
  l_count NUMBER;
  CURSOR c1 IS
    -- YOUR DATA FROM SOURCE
BEGIN
  FOR each_record IN c1 LOOP
    l_count := 0;
    -- find current record in dest table (on ID and flag = Y)
    SELECT COUNT(*) INTO l_count
    FROM destination_table
    WHERE field1 = each_record.field1 AND .... AND flag = 'Y';
    -- if any other fields do not match (Field1, Field2, Field3)
    IF l_count > 0 THEN
      NULL; -- update current record, set enddate, current flag to N
    END IF;
    NULL; -- INSERT a new record with startdate = sysdate, current flag = Y
  END LOOP;
END;
Mod by OP: That led me in the right direction. The TESTPROC procedure shown in the question above does the trick, provided there are also indexes on TableID and on (TableID, CurrentFlag).
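For reference, the supporting indexes mentioned above might be created like this (the index names are assumptions, not from the original post):

-- Supports the per-key lookups and the CurrentFlag = 'Y' checks in TESTPROC
CREATE INDEX desttable_id_ix      ON destTable (TableID);
CREATE INDEX desttable_id_flag_ix ON destTable (TableID, CurrentFlag);

The composite index alone may be enough, since its leading TableID column also covers the plain TableID lookup.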
Maybe you could use a trigger to do that.
CREATE OR REPLACE TRIGGER insTableID
  BEFORE INSERT OR UPDATE ON tableID
  FOR EACH ROW
DECLARE
  v_exists NUMBER := -1;
BEGIN
  SELECT COUNT(1) INTO v_exists FROM tableID t WHERE t.Field1 = :new.Field1 AND ... ;
  IF INSERTING THEN
    IF v_exists > 0 THEN
      null; -- your DML update statement
    ELSE
      null; -- your DML insert statement
    END IF;
  END IF;
  IF UPDATING THEN
    null; -- your DML statement to update the old row, plus a DML statement to insert the new row
  END IF;
END;
In this way, you can update the row holding the old values and insert a new row with the new values.
I hope this helps you solve your problem.
I am performing a bulk update operation on a table of 1 million records. I need to COMMIT every 5,000 records; how can I do that?
update tab1 t1
set (col1,col2,col3,col4)=
(select col1,col2,col3,col4 from tab_m where row_id= t1.row_id);
Per the question, if you only want the update to continue even when a record fails, with the failures logged, then I think you should go with Oracle's DML error logging clause. Hope this helps.
BEGIN
DBMS_ERRLOG.CREATE_ERROR_LOG('TAB1');
UPDATE tab1 t1
SET
(
COL1,
COL2,
COL3,
COL4
)
=
(SELECT COL1,COL2,COL3,COL4 FROM TAB_M WHERE ROW_ID= T1.ROW_ID
) LOG ERRORS REJECT LIMIT UNLIMITED;
END;
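With no INTO clause, LOG ERRORS writes to the default error table, which the CREATE_ERROR_LOG call above names ERR$_TAB1; the rejected rows and their error messages can then be inspected afterwards, for example:

SELECT ora_err_number$, ora_err_mesg$, ora_err_rowid$
  FROM err$_tab1;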
If you are looking for a PL/SQL solution, you can do it using BULK COLLECT and FORALL, as below:
DECLARE
   c_limit PLS_INTEGER := 100;   -- batch size; set to 5000 to commit every 5,000 rows

   -- Stand-in values: in the original example these arrive as procedure parameters
   department_id_in employees.department_id%TYPE := 50;
   increase_pct_in  NUMBER := 0.10;

   CURSOR employees_cur
   IS
      SELECT employee_id
        FROM employees
       WHERE department_id = department_id_in;

   TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE;

   l_employee_ids employee_ids_t;
BEGIN
OPEN employees_cur;
LOOP
FETCH employees_cur
BULK COLLECT INTO l_employee_ids
LIMIT c_limit; -- This will make sure that every iteration has 100 records selected
EXIT WHEN l_employee_ids.COUNT = 0;
FORALL indx IN 1 .. l_employee_ids.COUNT SAVE EXCEPTIONS
UPDATE employees emp -- Updating 100 records at 1 go.
SET emp.salary =
emp.salary + emp.salary * increase_pct_in
WHERE emp.employee_id = l_employee_ids(indx);
commit;
   END LOOP;

   CLOSE employees_cur;
EXCEPTION
WHEN OTHERS
THEN
IF SQLCODE = -24381
THEN
FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
LOOP
            -- Capturing errors that occurred during the update
            DBMS_OUTPUT.put_line (
                  SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX
               || ': '
               || SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);
            -- <You can insert the error records into a table here>
END LOOP;
ELSE
RAISE;
END IF;
END;
I need to update one of the tables in my database with random values. I have written multiple UPDATE statements, but they are taking a lot of time to execute; I only need to update 15 columns in a table of roughly 100 columns. Could someone help me write a PL/SQL procedure for the following statements? I have written them against the column numbers of the 15 columns to be updated, each set to a variable-length random value.
Thank you in advance.
UPDATE MY_Table
SET COL10=DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(10, 15))
where TRIM(COL1) IS NOT NULL ;
UPDATE MY_Table
SET COL11=DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(10, 15))
where TRIM(COL2) IS NOT NULL ;
UPDATE MY_Table
SET COL12=DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(8, 15))
where TRIM(COL3) IS NOT NULL ;
UPDATE MY_Table
SET COL13=DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(8, 15))
where TRIM(COL4) IS NOT NULL ;
UPDATE MY_Table
SET COL14=DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(8, 15))
where TRIM(COL5) IS NOT NULL;
UPDATE MY_Table
SET COL18=DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(1, 2))
where TRIM(COL18) IS NOT NULL;
UPDATE MY_Table
SET COL22=DBMS_RANDOM.VALUE(1, 1)
where TRIM(COL22) IS NOT NULL;
UPDATE MY_Table
SET COL37=DBMS_RANDOM.VALUE(1, 5)
where TRIM(COL37) IS NOT NULL;
UPDATE MY_Table
SET COL114=DBMS_RANDOM.VALUE(8, 10)
where TRIM(COL114) IS NOT NULL;
UPDATE MY_Table
SET COL140=DBMS_RANDOM.VALUE(8, 10)
where TRIM(COL140) IS NOT NULL;
UPDATE MY_Table
SET COL141=DBMS_RANDOM.VALUE(5, 15)
where TRIM(COL141) IS NOT NULL;
UPDATE MY_Table
SET COL145=DBMS_RANDOM.VALUE(8, 11)
where TRIM(COL145) IS NOT NULL;
UPDATE MY_Table
SET COL192=DBMS_RANDOM.VALUE(0.00, 9999999999999.00)
where TRIM(COL114) IS NOT NULL;
UPDATE MY_Table
SET COL193=DBMS_RANDOM.VALUE(0.00, 9999999999999.00)
where TRIM(COL114) IS NOT NULL;
UPDATE MY_Table
SET COL195=DBMS_RANDOM.VALUE(0.00, 9999999999999.00)
where TRIM(COL114) IS NOT NULL;
UPDATE MY_Table
SET COL114=DBMS_RANDOM.VALUE(5, 24)
where TRIM(COL114) IS NOT NULL;
You don't need 15 UPDATE statements; you can do this in a single statement:
UPDATE MY_Table
SET COL1 = case
when TRIM(COL1) IS NOT NULL then DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(10, 15))
else col1
end,
COL3 = case
when TRIM(COL3) IS NOT NULL then DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(8, 15))
else col3
end,
COL15 = case
when TRIM(COL15) IS NOT NULL then DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(5, 15))
else col15
end
If you have many rows that do not satisfy any of the conditions, adding a WHERE clause could speed things up:
update my_table
set ....
where (TRIM(COL1) IS NOT NULL or
TRIM(COL3) IS NOT NULL or
TRIM(COL15) IS NOT NULL)
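Putting the two pieces together (still just the three example columns; the remaining columns follow the same pattern), the combined statement might look like:

UPDATE my_table
   SET COL1 = CASE
                WHEN TRIM(COL1) IS NOT NULL THEN DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(10, 15))
                ELSE COL1
              END,
       COL3 = CASE
                WHEN TRIM(COL3) IS NOT NULL THEN DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(8, 15))
                ELSE COL3
              END,
       COL15 = CASE
                 WHEN TRIM(COL15) IS NOT NULL THEN DBMS_RANDOM.STRING('A', DBMS_RANDOM.VALUE(5, 15))
                 ELSE COL15
               END
 WHERE TRIM(COL1) IS NOT NULL
    OR TRIM(COL3) IS NOT NULL
    OR TRIM(COL15) IS NOT NULL;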
You can use dynamic SQL, for example:
begin
for i in (select n,
case
when n = 1 then 'DBMS_RANDOM.STRING(''A'', DBMS_RANDOM.VALUE(10, 15))'
when n = 10 then '...'
...
else '...'
end as val
from (
select level as n
from dual
connect by level <= 91
) where n in (10, 20)
)
loop
execute immediate 'UPDATE MY_Table
SET COL' || i.n || '=' || i.val || '
where TRIM(COL' || i.n || ') IS NOT NULL';
end loop;
end;
Can you please suggest what is wrong with this query? It always extracts 0 records and does not insert the data.
I have checked the SELECT query and it returns rows, but I am not sure what is going wrong in the MERGE part such that it does not insert/update the table.
DECLARE
  ExtractType         NUMBER(9);
  RecordsExtracted    NUMBER(9);
  CurStatus           NUMBER(9);
  StartDate           DATE;
  ErrorMessage        NVARCHAR2(1000);
  LastExtrctTimestamp DATE;
BEGIN
StartDate := sysdate;
ExtractType := 79;
-- Fetching the Last Extract Time Stamp
Select max(ExtractTimestamp) INTO LastExtrctTimestamp from ExtractRecords where Status = 2 and ExtractRecords.ExtractType= ExtractType;
IF LastExtrctTimestamp IS NULL
THEN LastExtrctTimestamp := To_Date('01/01/1901', 'MM/dd/yyyy');
END IF;
MERGE INTO Table MCTH
USING (
SELECT
val1, val2, val3, .... val1
FROM
View_RPT
WHERE TransitionDate >= LastExtrctTimestamp
) Core
ON(MCTH.valId= Core.ValId)
WHEN MATCHED THEN
UPDATE SET
MCTH.val1= Core.val1,
MCTH.val2= Core.val2,
MCTH.val3= Core.val3,
.
.
MCTH.val4= Core.val4
WHEN NOT MATCHED THEN
INSERT (MCTH.val1,MCTH.val2,MCTH.val3,MCTH.val4,
...,MCTH.val5)
VALUES (Core.val1,Core.val2,Core.val3,Core.val4,
...,Core.val5);
RecordsExtracted := SQL%RowCount;
DBMS_OUTPUT.put_line('MCTH Records Merged:' || RecordsExtracted);
COMMIT;
END;
Roll your PL/SQL logic into the MERGE statement; then you can more easily test whether core returns what you expect:
merge into
margincalltransitionhistory mcth
using (
select
margincalltransitionhistoryid,
margincallid,
fromworkflowstatename,
toworkflowstatename,
transitiondate,
transitionbyname,
transitioncomment
from
margincalltranhistory_rpt
where
transitiondate >= (
select coalesce(max(extracttimestamp), date '1901-01-01')
from extractrecords
where status = 2 and
extracttype = 79)
) core
...
And for the love of God clean your code up -- I have no idea how you can work with that mess.