For loop update better alternative - sql

In Oracle 11g, I am using the following in a procedure. Can someone please provide a better solution to achieve the same results?
FOR REC IN
(SELECT E.EMP FROM EMPLOYEE E
JOIN
COMPANY C ON E.EMP=C.EMP
WHERE C.FLAG='Y')
LOOP
UPDATE EMPLOYEE SET FLAG='Y' WHERE EMP=REC.EMP;
END LOOP;
Is there a more efficient/better way to do this? I feel as if this method will run one update statement for each record found (Please correct me if I am wrong).
Here's the actual code in full:
create or replace
PROCEDURE ACTION_MSC AS
BEGIN
-- ALL MIGRATED CONTACTS, CANDIDATES, COMPANIES, JOBS
-- ALL MIGRATED CANDIDATES, CONTACTS
FOR REC IN (SELECT DISTINCT AC.PEOPLE_HEX
FROM ACTION AC JOIN PEOPLE P ON AC.PEOPLE_HEX=P.PEOPLE_HEX
WHERE P.TO_MIGRATE='Y')
LOOP
UPDATE ACTION SET TO_MIGRATE='Y' WHERE PEOPLE_HEX=REC.PEOPLE_HEX;
END LOOP;
-- ALL MIGRATED COMPANIES
FOR REC IN (SELECT DISTINCT AC.COMPANY_HEX
FROM ACTION AC JOIN COMPANY CM ON AC.COMPANY_HEX=CM.COMPANY_HEX
WHERE CM.TO_MIGRATE='Y')
LOOP
UPDATE ACTION SET TO_MIGRATE='Y' WHERE COMPANY_HEX=REC.COMPANY_HEX;
END LOOP;
-- ALL MIGRATED JOBS
FOR REC IN (SELECT DISTINCT AC.JOB_HEX
FROM ACTION AC JOIN "JOB" J ON AC.JOB_HEX=J.JOB_HEX
WHERE J.TO_MIGRATE='Y')
LOOP
UPDATE ACTION SET TO_MIGRATE='Y' WHERE JOB_HEX=REC.JOB_HEX;
END LOOP;
COMMIT;
END ACTION_MSC;

You're right, it will do one update for each record found. Looks like you could just do:
UPDATE EMPLOYEE SET FLAG = 'Y'
WHERE EMP IN (SELECT EMP FROM COMPANY WHERE FLAG = 'Y')
AND FLAG != 'Y';
A single update will generally be faster and more efficient than multiple individual row updates in a loop; see this answer for another example. Apart from anything else, you're reducing the number of context switches between PL/SQL and SQL, which add up if you have a lot of rows. You could always benchmark this with your own data, of course.
I've added a check of the current flag state so you don't do a pointless update with no changes.
It's fairly easy to compare the approaches to see that a single update is faster than one in a loop; with some contrived data:
create table people (id number, people_hex varchar2(16), to_migrate varchar2(1));
insert into people (id, people_hex, to_migrate)
select level, to_char(level - 1, 'xx'), 'Y'
from dual
connect by level <= 100;
create table action (id number, people_hex varchar2(16), to_migrate varchar2(1));
insert into action (id, people_hex, to_migrate)
select level, to_char(mod(level, 200), 'xx'), 'N'
from dual
connect by level <= 500000;
All of these will update half the rows in the action table. Updating in a loop:
begin
for rec in (select distinct ac.people_hex
from action ac join people p on ac.people_hex=p.people_hex
where p.to_migrate='Y')
loop
update action set to_migrate='Y' where people_hex=rec.people_hex;
end loop;
end;
/
Elapsed: 00:00:10.87
Single update (after rollback; I've left this in a block to mimic your procedure):
begin
update action set to_migrate = 'Y'
where people_hex in (select people_hex from people where to_migrate = 'Y');
end;
/
Elapsed: 00:00:07.14
Merge (after rollback):
begin
merge into action a
using (select people_hex, to_migrate from people where to_migrate = 'Y') p
on (a.people_hex = p.people_hex)
when matched then update set a.to_migrate = p.to_migrate;
end;
/
Elapsed: 00:00:07.00
There's some variation across repeated runs; in particular, the update and merge are usually pretty close and sometimes swap which is faster in my environment, but both are always significantly faster than updating in a loop. You can repeat this in your own environment with your own data spread and volumes, and you should if performance is that critical; but a single update is going to be faster than the loop. Whether you use update or merge isn't likely to make much difference.


I must not add row with 2 same values in columns

Can I enter a rule when creating a table so that I, as an author, can't add a review to a product I've already reviewed?
I've been thinking about triggers, but I don't know exactly how to set one up. In the workbench I can check for duplicates via this code:
declare
pocet number := 0;
begin
SELECT COUNT(a."id_recenze")
INTO pocet
FROM "recenze" a
INNER JOIN (SELECT "id_komponenty", "id_autora"
FROM "recenze"
GROUP BY "id_komponenty", "id_autora"
HAVING COUNT(*) > 1) b
ON a."id_komponenty" = b."id_komponenty" AND a."id_autora" = b."id_autora";
if pocet > 2 then
DBMS_OUTPUT.put_line('Nesmite vytvaret recenzi na komponentu, u ktere jste uz recenzoval'); -- "You must not create a review for a component you have already reviewed"
else
DBMS_OUTPUT.put_line('Vysledek je v poradku'); -- "The result is OK"
end if;
end;
But I don't want it to be possible to create these records at all.
Can someone help me with how I can do that?
I use Oracle APEX.
EDIT (24.4. 10:35):
In a nutshell, I don't want records where the same id_autora and id_komponenty combination appears more than once. For example, I don't want this:
id_recenze (PK)    id_autora    id_komponenty
1                  2            3
2                  2            3
After your explanation I see you could still use a unique index, but create it on the combination of id_komponenty and id_autora. That would throw an error if you tried to add a duplicate.
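For example, a minimal sketch of such a constraint, assuming the quoted table and column names from your code (the constraint name is made up):
ALTER TABLE "recenze" ADD CONSTRAINT "recenze_uq" UNIQUE ("id_autora", "id_komponenty");
Any insert that repeats an existing id_autora/id_komponenty combination would then fail with ORA-00001.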
But I see from your code that you are trying to update with the most recent values when there is a duplicate. In that case I would abandon the idea of the index and the trigger, and I would replace the INSERT statement (not shown) with Oracle's MERGE statement. This gives you insert and update logic in a single statement, plus you get to define the criteria for when each happens. It would look something like:
MERGE INTO recenze r
USING (Select <newid_komponenty> AS newk
,<newid_autora> AS newa from Dual) n
ON (r.id_komponenty=n.newk And r.id_autora=n.newa)
WHEN MATCHED THEN UPDATE SET {your update logic here}
WHEN NOT MATCHED THEN INSERT {your insert logic here}
Personally, I try to stay away from triggers when there are other solutions available. By altering your Insert statement to this Merge you get the same effect with one less DB object to keep track of and maintain.
I got it:
CREATE TRIGGER "nesmiPridat" BEFORE INSERT ON "recenze"
FOR EACH ROW BEGIN
DECLARE pocet INT(2);
DECLARE smazat INT(2);
SET pocet := (SELECT COUNT("id_recenze") FROM "recenze" WHERE (NEW."id_autora" = "id_autora") AND (NEW."id_komponenty" = "id_komponenty"));
SET smazat :=(SELECT "id_recenze" FROM "recenze" WHERE (NEW."id_autora" = "id_autora") AND (NEW."id_komponenty" = "id_komponenty"));
IF (pocet > 0) THEN
DELETE FROM "recenze" WHERE smazat."id_recenze" = NEW."id_recenze";
END IF;
END;

SQL optimization

Right now I am facing an optimization problem.
I have a list of articles (17,000+) and some of them are inactive. The list is provided by the client in an Excel file and he asked me to resend them (obviously only the active ones).
For this, I have to filter the production database based on the list provided by the customer. Unfortunately, I cannot load the list into a separate table in production and then join it with the master article table, but I was able to do this in a UAT database that is linked with the production one.
The production article master data contains 200,000,000+ rows, but by filtering it I can reduce that to around 80,000,000.
In order to retrieve only the active articles from production, I was thinking of using collections, but it seems the last filter is taking far too long.
Here is my code:
declare
type t_art is table of number index by pls_integer;
v_art t_art;
v_filtered t_art;
idx number := 0;
begin
for i in (select * from test_table@UAT_DATABASE)
loop
idx := idx + 1;
v_art(idx) := i.art_nr;
end loop;
for j in v_art.first .. v_art.last
loop
select distinct art_nr
bulk collect into v_filtered
from production_article_master_data
where status = 0 -- status is active
and sperr_stat in (0, 2)
and trunc(valid_until) >= trunc(sysdate)
and art_nr = v_art(j);
end loop;
end;
Explanation: from the UAT database, via a DB link, I am loading the list into an ASSOCIATIVE ARRAY in production (v_art). Then, for each value in v_art (17,000+ distinct articles), I am filtering against the production article master data, returning only the valid articles (there might be 6,000-8,000) into a second ASSOCIATIVE ARRAY.
Unfortunately, this filtering action is taking hours.
Can someone provide some hints on how to improve this in order to decrease the execution time, please?
Thank you,
Just use SQL and join the two tables:
select distinct p.art_nr
from production_article_master_data p
INNER JOIN
test_table@UAT_DATABASE t
ON ( p.art_nr = t.art_nr )
where status = 0 -- status is active
and sperr_stat in (0, 2)
and trunc(valid_until) >= trunc(sysdate)
If you have to do it in PL/SQL then:
CREATE OR REPLACE TYPE numberlist IS TABLE OF NUMBER;
/
declare
-- If you are using Oracle 12c you should be able to declare the
-- type in the PL/SQL block. In earlier versions you will need to
-- declare it in the SQL scope instead.
-- TYPE numberlist IS TABLE OF NUMBER;
v_art NUMBERLIST;
v_filtered NUMBERLIST;
begin
select art_nr
BULK COLLECT INTO v_art
from test_table@UAT_DATABASE;
select distinct art_nr
bulk collect into v_filtered
from production_article_master_data
where status = 0 -- status is active
and sperr_stat in (0, 2)
and trunc(valid_until) >= trunc(sysdate)
and art_nr MEMBER OF v_art;
end;

Table not updated by trigger and procedure

I have a system on which people place orders; each order has actions, and there is a table called cm_ord_order_action. Sometimes actions fail, so I need to make a trigger that gets the information for the failed order action and populates a table called cm_ord_failed_order.
The trigger is shown below:
CREATE OR REPLACE TRIGGER CM.TRGID_CM_ORD_FAILED_ORDER
AFTER UPDATE ON CM.CM_ORD_ORDER_ACTION
FOR EACH ROW
BEGIN
IF (:new.STATUS = 'FA') THEN
CM.CM_FAILED_ORDER_MLT(:new.order_unit_id, :new.order_id, :new.action_type);
END IF;
END;
/
This trigger passes parameters to a procedure which updates the table:
CREATE OR REPLACE PROCEDURE CM_FAILED_ORDER_MLT(
v_order_unit_id NUMBER,
v_order_id in NUMBER,
v_action_type in VARCHAR)
AS
v_lob varchar(100);
v_step varchar(100);
v_error varchar(200);
BEGIN
SELECT
ITEM.LOB_NAME, ST.STEP_NAME, ASS.STEP_ERROR
INTO v_lob, v_step, v_error
FROM
CM.CM_ORD_ORDER_ACTION OA
INNER JOIN CM.CM_ORD_ASSIGNMENTS ASS
ON OA.ORDER_UNIT_ID = ASS.ORDER_ACTION_ID
INNER JOIN CM.CM_ORD_PROCESS_STEP ST
ON ST.ORD_PROCESS_STEP_ID = ASS.STEP_ID
INNER JOIN CM.CM_ORD_AP_ITEM ITEM
ON ITEM.AP_SUBSCRIBER_ID = OA.AP_SUBSCRIBER_ID
WHERE ASS.COMPLETION_STATUS = 'FA'
AND OA.ORDER_ID = v_order_id
AND OA.ORDER_UNIT_ID = v_order_unit_id
GROUP BY OA.ORDER_UNIT_ID, ITEM.LOB_NAME, ST.STEP_NAME, ASS.STEP_ERROR;
INSERT INTO CM_ORD_FAILED_ORDER (ORDER_ID, FAILED_DATE, ORDER_ACTION_ID, ACTION_TYPE, LOB, STEP, ERROR)
VALUES (v_order_id, sysdate, v_order_unit_id, v_action_type, v_lob, v_step, v_error);
END CM_FAILED_ORDER_MLT;
/
There is probably something wrong here because:
A - Even though the trigger is for after update on cm_ord_order_action, when the trigger is enabled, the status is not being updated, but when I disable the trigger the status is updated.
B - the table cm_ord_failed_order is not being populated with the information.
Thanks in advance.
You can avoid the mutating table error your script is somehow ignoring or discarding by doing the insert directly in the trigger, where you have the details from the row being updated in the :NEW pseudorecord and don't have to query it again. You can also do an insert...select without needing local variables.
I think this is a rough translation:
CREATE OR REPLACE TRIGGER CM.TRGID_CM_ORD_FAILED_ORDER
AFTER UPDATE ON CM.CM_ORD_ORDER_ACTION
FOR EACH ROW
WHEN (new.STATUS = 'FA')
BEGIN
INSERT INTO CM_ORD_FAILED_ORDER (ORDER_ID, FAILED_DATE, ORDER_ACTION_ID, ACTION_TYPE,
LOB, STEP, ERROR)
SELECT
DISTINCT :new.ORDER_ID, sysdate, :new.Order_Unit_Id, :new.Action_Type,
ITEM.LOB_NAME, ST.STEP_NAME, ASS.STEP_ERROR
FROM
CM.CM_ORD_ASSIGNMENTS ASS
INNER JOIN CM.CM_ORD_PROCESS_STEP ST
ON ST.ORD_PROCESS_STEP_ID = ASS.STEP_ID
CROSS JOIN CM.CM_ORD_AP_ITEM ITEM
WHERE ASS.ORDER_ACTION_ID = :new.ORDER_UNIT_ID
AND ASS.COMPLETION_STATUS = :new.STATUS
AND ITEM.AP_SUBSCRIBER_ID = :new.AP_SUBSCRIBER_ID;
END;
/
The DISTINCT (instead of grouping) and CROSS JOIN suggest you're missing a join condition in your original query, but without your table structures and data that may not be the case.
Alternatively you could keep the insert in a procedure, but pass :new.AP_SUBSCRIBER_ID as another argument, since that seems to be the only column you need from the mutating table that you aren't already passing in.
Your trigger could also be a BEFORE UPDATE rather than AFTER UPDATE.
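For example, the trigger call might then look something like this (same procedure as in the question, with the one extra argument):
CREATE OR REPLACE TRIGGER CM.TRGID_CM_ORD_FAILED_ORDER
AFTER UPDATE ON CM.CM_ORD_ORDER_ACTION
FOR EACH ROW
WHEN (new.STATUS = 'FA')
BEGIN
CM.CM_FAILED_ORDER_MLT(:new.ORDER_UNIT_ID, :new.ORDER_ID, :new.ACTION_TYPE, :new.AP_SUBSCRIBER_ID);
END;
/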
An alternative to Alex's solution that avoids the need for a cross join would be to change the procedure to:
create or replace procedure cm_failed_order_mlt (v_order_unit_id number,
v_order_id in number,
v_action_type in varchar,
v_ap_subscriber_id in cm.cm_ord_order_action.ap_subscriber_id%type)
as
v_lob varchar(100);
v_step varchar(100);
v_error varchar(200);
begin
select distinct lob_name
into v_lob
from cm.cm_ord_ap_item
where ap_subscriber_id = v_ap_subscriber_id;
select distinct st.step_name, ass.step_error
into v_step, v_error
from cm.cm_ord_assignments ass
inner join cm.cm_ord_process_step st on st.ord_process_step_id = ass.step_id
where ass.completion_status = 'FA'
and ass.order_action_id = v_order_unit_id;
insert into cm_ord_failed_order (order_id, failed_date, order_action_id, action_type, lob, step, error)
values (v_order_id, sysdate, v_order_unit_id, v_action_type, v_lob, v_step, v_error);
end cm_failed_order_mlt;
/
Or, to remove the cross join in Alex's solution, simply replace it with a scalar subquery, e.g.:
select (select distinct lob_name from cm.cm_ord_ap_item where ap_subscriber_id = v_ap_subscriber_id), ...
Like @JustinCave said, it is clear that you have a mutating table error:
Mutating table exceptions occur when we try to reference the
triggering table in a query from within row-level trigger code
On a trigger on CM_ORD_ORDER_ACTION you are selecting from that same table. Try to redo the query in the procedure without referencing CM_ORD_ORDER_ACTION.

SQL optimization question (oracle)

Edit: Please answer one of the two questions I ask. I know there are other options that would be better in a different case. These other potential options (partitioning the table, running it as one large delete statement without committing in batches, etc.) are NOT options in my case due to things outside my control.
I have several very large tables to delete from. All have the same foreign key that is indexed. I need to delete certain records from all tables.
table source
id --primary_key
import_source --used for choosing the ids to delete
table t1
id --foreign key
--other fields
table t2
id --foreign key
--different other fields
Usually when doing a delete like this, I'll put together a loop to step through all the ids:
declare
my_counter integer := 0;
begin
for cur in (
select id from source where import_source = 'bad.txt'
) loop
begin
delete from source where id = cur.id;
delete from t1 where id = cur.id;
delete from t2 where id = cur.id;
my_counter := my_counter + 1;
if my_counter > 500 then
my_counter := 0;
commit;
end if;
end;
end loop;
commit;
end;
However, in some code I saw elsewhere, it was put together in separate loops, one for each delete.
declare
type import_ids is table of integer index by pls_integer;
my_import_ids import_ids;
my_count integer := 0;
begin
select id bulk collect into my_import_ids from source where import_source = 'bad.txt';
for h in 1..my_import_ids.count loop
delete from t1 where id = my_import_ids(h);
--do commit check
end loop;
for h in 1..my_import_ids.count loop
delete from t2 where id = my_import_ids(h);
--do commit check
end loop;
end;
The "do commit check" would be replaced with the same chunk that commits every 500 rows, as in the block above.
So I need one of the following answered:
1) Which of these is better?
2) How can I find out which is better for my particular case? (i.e. whether it depends on how many tables I have, how big they are, etc.)
Edit:
I must do this in a loop due to the size of these tables. I will be deleting thousands of records from tables with hundreds of millions of records. This is happening on a system that can't afford to have the tables locked for that long.
EDIT:
NOTE: I am required to commit in batches. The amount of data is too large to do it in one batch. The rollback tables will crash our database.
If there is a way to commit in batches other than looping, I'd be willing to hear it. Otherwise, don't bother saying that I shouldn't use a loop...
Why loop at all?
delete from t1 where id IN (select id from source where import_source = 'bad.txt');
delete from t2 where id IN (select id from source where import_source = 'bad.txt');
delete from source where import_source = 'bad.txt';
That's using standard SQL. I don't know Oracle specifically, but many DBMSes also feature multi-table JOIN-based DELETEs as well that would let you do the whole thing in a single statement.
David,
If you insist on committing, you can use the following code:
declare
type import_ids is table of integer index by pls_integer;
my_import_ids import_ids;
cursor c is select id from source where import_source = 'bad.txt';
begin
open c;
loop
fetch c bulk collect into my_import_ids limit 500;
forall h in 1..my_import_ids.count
delete from t1 where id = my_import_ids(h);
forall h in 1..my_import_ids.count
delete from t2 where id = my_import_ids(h);
commit;
exit when c%notfound;
end loop;
close c;
end;
This program fetches ids in batches of 500 rows, deleting and committing each batch. It should be much faster than row-by-row processing, because bulk collect and forall work as a single operation (in a single round-trip to and from the database), thus minimizing the number of context switches. See Bulk Binds, Forall, Bulk Collect for details.
First of all, you shouldn't commit in the loop - it is not efficient (it generates lots of redo) and if some error occurs, you can't roll back.
As mentioned in previous answers, you should issue single deletes, or, if you are deleting most of the records, it could be more optimal to create new tables with the remaining rows, drop the old ones and rename the new ones to the old names.
Something like this:
CREATE TABLE new_table as select * from old_table where <filter only remaining rows>;
index new_table
grant on new table
add constraints on new_table
etc on new_table
drop table old_table
rename new_table to old_table;
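A concrete version of that sketch for one of the question's tables might look like this (the index name is made up; repeat for each table):
CREATE TABLE t1_new AS
select * from t1
where id not in (select id from source where import_source = 'bad.txt');
CREATE INDEX t1_new_id_idx ON t1_new (id);
-- recreate grants and constraints on t1_new as well
DROP TABLE t1;
RENAME t1_new TO t1;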
See also Ask Tom
Larry Lustig is right that you don't need a loop. Nonetheless there may be some benefit in doing the delete in smaller chunks. Here PL/SQL bulk binds can improve speed greatly:
declare
type import_ids is table of integer index by pls_integer;
my_import_ids import_ids;
my_count integer := 0;
begin
select id bulk collect into my_import_ids from source where import_source = 'bad.txt';
forall h in 1..my_import_ids.count
delete from t1 where id = my_import_ids(h);
forall h in 1..my_import_ids.count
delete from t2 where id = my_import_ids(h);
end;
The way I wrote it, it does it all at once, in which case, yes, the single SQL is better. But you can change your loop conditions to break it into chunks. The key points are:
don't commit on every row. If anything, commit only every N rows.
When using chunks of N, don't run the delete in an ordinary loop. Use forall to run the delete as a bulk bind, which is much faster.
The reason, aside from the overhead of commits, is that each time you execute an SQL statement inside PL/SQL code it essentially does a context switch. Bulk binds avoid that.
You may try partitioning anyway to use parallel execution, not just to drop one partition. The Oracle documentation may prove useful in setting this up. Each partition would use its own rollback segment in this case.
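A rough sketch of what that might look like once the tables are partitioned (the degree of 8 is arbitrary):
alter session enable parallel dml;
delete /*+ parallel(t1, 8) */ from t1
where id in (select id from source where import_source = 'bad.txt');
commit;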
If you are doing the delete from the source before the t1/t2 deletes, that suggests you don't have referential integrity constraints (as otherwise you'd get errors saying child records exist).
I'd go for creating the constraint with ON DELETE CASCADE. Then a simple loop like this would do:
DECLARE
v_cnt NUMBER := 1;
BEGIN
WHILE v_cnt > 0 LOOP
DELETE FROM source WHERE import_source = 'bad.txt' and rownum < 5000;
v_cnt := SQL%ROWCOUNT;
COMMIT;
END LOOP;
END;
The child records would get deleted automatically.
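For completeness, the cascading foreign keys could be created with something like this (constraint names are made up, assuming t1.id and t2.id reference source.id):
ALTER TABLE t1 ADD CONSTRAINT t1_source_fk FOREIGN KEY (id) REFERENCES source (id) ON DELETE CASCADE;
ALTER TABLE t2 ADD CONSTRAINT t2_source_fk FOREIGN KEY (id) REFERENCES source (id) ON DELETE CASCADE;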
If you can't have the ON DELETE CASCADE, I'd go with a GLOBAL TEMPORARY TABLE with ON COMMIT DELETE ROWS:
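For reference, the temporary table only needs to be created once, up front; a minimal sketch, inferring the single ID column from the block below:
CREATE GLOBAL TEMPORARY TABLE temp (id NUMBER) ON COMMIT DELETE ROWS;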
DECLARE
v_cnt NUMBER := 1;
BEGIN
WHILE v_cnt > 0 LOOP
INSERT INTO temp (id)
SELECT id FROM source WHERE import_source = 'bad.txt' and rownum < 5000;
v_cnt := SQL%ROWCOUNT;
DELETE FROM t1 WHERE id IN (SELECT id FROM temp);
DELETE FROM t2 WHERE id IN (SELECT id FROM temp);
DELETE FROM source WHERE id IN (SELECT id FROM temp);
COMMIT;
END LOOP;
END;
I'd also go for the largest chunk your DBA will allow.
I'd expect each transaction to last for at least a minute. More frequent commits would be a waste.
This is happening on a system that can't afford to have the tables locked for that long.
Oracle doesn't lock tables, only rows. I'm assuming no-one will be locking the rows you are deleting (or at least not for long). So locking is not an issue.

Oracle: how to UPSERT (update or insert into a table?)

The UPSERT operation either updates or inserts a row in a table, depending if the table already has a row that matches the data:
if a row exists in table t that has key X:
update t set mystuff... where mykey=X
else
insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source allows us to use this command with a single row of supplied values. Note that this is not protected against concurrent access.
create or replace
procedure ups(xa number)
as
begin
merge into mergetest m using dual on (a = xa)
when not matched then insert (a,b) values (xa,1)
when matched then update set b = b+1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
A B
---------------------- ----------------------
10 2
20 1
The DUAL example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client-side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153,"smith", "john" )
However, from a C# perspective this proved to be slower than doing the update, checking whether the number of rows affected was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"):
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if it does not exist, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent access, as pointed out in Tim Sylvester's comment; they will raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
...
) IS
BEGIN
LOOP
BEGIN
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"="smith" , "name"="john"
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153,"smith", "john" );
EXIT; -- success? -> exit loop
EXCEPTION
WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
NULL; -- exception? -> no op, i.e. continue looping
WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
NULL; -- exception? -> no op, i.e. continue looping
END;
END LOOP;
END;
N.B. In transaction mode SERIALIZABLE, which I don't recommend btw, you might run into
ORA-08177: can't serialize access for this transaction exceptions instead.
I like Grommit's answer, except it requires duplicate values. I found a solution where each value may appear only once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice sql%notfound rather than sql%rowcount.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved version:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think merge is better when you do have some processing to be done that means taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single row case, you may consider the first case since the syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this:
insert into b_building_property (
select
'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
from dual
minus
select * from b_building_property where id = 9
);
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
WHEN credit_limit >= 100000 THEN
INTO rich_customers VALUES (cust_id, cust_credit_limit)
INTO customers
ELSE
INTO customers
SELECT * FROM new_customers;