Insert or update table - SQL

I have a list of 100k ids in a file. I want to iterate through these ids:
for each id, check if id is in a table:
If it is, update its updated_date flag
If not, add a new record (id, updated_date)
I have researched and found the MERGE statement. The downside is that MERGE requires the ids to be in a table, and I am only allowed to create a temporary table if necessary.
Can anyone point me in the right direction? It must be a script that I can run on my database, not in code.
merge into MyTable x
using ('111', '222', all my ids) b
on (x.id = b.id)
when not matched then
insert (id, updated_date) values (b.id, sysdate)
when matched then
update set x.updated_date = sysdate;
EDIT: I am now able to use a temporary table if it's my only option.

Given that you say you can't create a temporary table, one way might be to convert your list of ids into a set of UNION ALL'd selects, e.g.:
123,
234,
...
999
becomes
select 123 id from dual union all
select 234 id from dual union all
...
select 999 id from dual
You could then use that in your merge statement:
merge into MyTable x
using (select 123 id from dual union all
select 234 id from dual union all
...
select 999 id from dual) b
on (x.id = b.id)
when not matched then insert (id, updated_date) values (b.id, sysdate)
when matched then update set x.updated_date = sysdate;
If you've really got 100k ids, it might take a while to parse the statement, however! You might want to split up the queries and have several merge statements.
One other thought - is there not an existing GTT (global temporary table) that you could "borrow" to store your data?
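Since the edit says a temporary table is now an option, a global temporary table is probably the cleanest route. A minimal sketch (the table name id_load_gtt is an assumption):
create global temporary table id_load_gtt (
  id number
) on commit preserve rows;

-- load the 100k ids by whatever means is easiest
-- (SQL*Loader, an external table, or batched inserts as described below)
insert into id_load_gtt values (111);
insert into id_load_gtt values (222);
-- ...

merge into MyTable x
using (select id from id_load_gtt) b
on (x.id = b.id)
when matched then
  update set x.updated_date = sysdate
when not matched then
  insert (id, updated_date) values (b.id, sysdate);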

If you have access to the file from your Oracle server then you can define an external table, which will allow you to read from the file using SQL.
The syntax is based on SQL*Loader's, and it's probably not something you'd want to set up for a one-off job; it suits a recurring task better.
Alternatively you could use SQL*Loader itself to load it into a table, or even ODBC from a Microsoft Access or similar database.
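For illustration, a minimal external table sketch; the directory name, directory path and file name here are assumptions, and creating the directory object needs the appropriate privilege:
create directory id_file_dir as '/data/feeds';

create table ext_ids (
  id number
)
organization external (
  type oracle_loader
  default directory id_file_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('ids.txt')
)
reject limit unlimited;
The MERGE from the question can then use (select id from ext_ids) as its source instead of a literal list of ids.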
Another option is to run 100,000 inserts. You can make this perform much better by taking each 100 or so inserts and wrapping them in an anonymous block, which saves roundtrips to the database, so instead of:
insert into tmp values(1);
insert into tmp values(12);
insert into tmp values(145);
insert into tmp values(234);
insert into tmp values(245);
insert into tmp values(345);
....
insert into tmp values(112425);
use ...
begin
insert into tmp values(1);
insert into tmp values(12);
insert into tmp values(145);
insert into tmp values(234);
...
insert into tmp values(245);
end;
/
begin
insert into tmp values(345);
...
insert into tmp values(112425);
end;
/
If it was a regular task I'd definitely go for an external table though.

Related

Multiple merge statements in a snowflake transaction

I'm new to Snowflake and just tried creating a stored procedure to read deltas from a stream, tableA, and execute two merge statements into tables B & C, but I keep getting this error message: "Multiple SQL statements in a single API call are not supported; use one API call per statement instead."
Here's my stored procedure. The idea is to call it from a task when system$stream_has_data('tableA').
create or replace procedure write_delta()
returns varchar
language sql as
$$
begin transaction;
-- 1/2. load table B
merge into tableB as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
-- 2/2. load tableC
merge into tableC as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
commit;
$$;
Snowflake SQL stored procedures take two forms.
Form 1: A single SQL statement. This is mostly useful when calling from a task to execute a single statement.
Form 2: A Snowflake Scripting block. The following script shows your code converted to a stored procedure with a scripting block:
create or replace table table1 (id int, value string);
create or replace table tableB like table1;
create or replace table tableC like table1;
create or replace stream tableA on table table1;
insert into table1(id, value) values (1, 'A'), (2, 'B');
create or replace procedure myprocedure()
returns varchar
language sql
as
$$
begin
begin transaction;
-- 1/2. load table B
merge into tableB as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
-- 2/2. load tableC
merge into tableC as dest
using tableA as src
on src.id = dest.id
when not matched
then insert (id, value) values( src.id, src.value);
commit;
end;
$$
;
call myprocedure();
select * from tableB;
select * from tableC;
You can get more information on how to write them here: https://docs.snowflake.com/en/developer-guide/snowflake-scripting/index.html
If you want to execute multiple statements, you'll need to run the stored procedure using Snowflake Scripting, JavaScript, Java, or Python.
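For reference, a sketch of the task that could invoke the procedure when the stream has data; the warehouse name and schedule below are assumptions:
create or replace task load_from_stream
  warehouse = my_wh                          -- assumed: an existing warehouse
  schedule = '5 minute'
  when system$stream_has_data('TABLEA')
as
  call myprocedure();

-- tasks are created suspended, so enable it
alter task load_from_stream resume;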

Solution to Oracle mutating trigger

I am stuck on a small requirement:
my table should reject any overlapping data that is being inserted or updated.
Below is my try so far:
CREATE TABLE my_table (
ID NUMBER,
startdate DATE,
enddate DATE,
CONSTRAINT my_table_pk PRIMARY KEY ( ID,startdate,enddate )
);
/
CREATE OR REPLACE TRIGGER trg_my_table_biu
BEFORE INSERT OR UPDATE
ON my_table
FOR EACH ROW
DECLARE
v_count NUMBER;
BEGIN
SELECT COUNT(*)
INTO v_count
FROM my_table
WHERE id = :new.id
AND startdate <= :new.enddate
AND enddate >= :new.startdate;
IF v_count >= 1 THEN
raise_application_error( -20001, 'Cannot make the data overlapped.!' );
END IF;
END;
/
--existing data - good data - Result: Success
INSERT INTO my_table VALUES (1, to_date('01/02/2018','dd/mm/yyyy '),to_date('01/03/2018','dd/mm/yyyy '));
--1 good data - Result: Success
INSERT INTO my_table VALUES (1, to_date('01/01/2018','dd/mm/yyyy '),to_date('15/01/2018','dd/mm/yyyy '));
--2 good data - Result: Success
INSERT INTO my_table VALUES (1, to_date('02/03/2018','dd/mm/yyyy '),to_date('31/03/2018','dd/mm/yyyy '));
--3 bad data - Result: Success
INSERT INTO MY_TABLE VALUES (1, TO_DATE('01/01/2018','dd/mm/yyyy '),TO_DATE('01/04/2018','dd/mm/yyyy '));
--4 bad data - Result: Success
INSERT INTO my_table VALUES (1, to_date('15/01/2018','dd/mm/yyyy '),to_date('02/02/2018','dd/mm/yyyy '));
--5 bad data - Result: Success
INSERT INTO my_table VALUES (1, to_date('16/02/2018','dd/mm/yyyy '),to_date('15/03/2018','dd/mm/yyyy '));
--6 bad data - Result: Success
INSERT INTO my_table VALUES (1, to_date('15/02/2018','dd/mm/yyyy '),to_date('20/02/2018','dd/mm/yyyy '));
--7 good data - Result: Fail
UPDATE my_table
SET enddate = TO_DATE('31/03/2018','dd/mm/yyyy') + 1
WHERE startdate = TO_DATE('02/03/2018','dd/mm/yyyy');
For the 7th statement, i.e. the UPDATE, I am facing a mutating table error.
Please help me here.
Thanks in advance.
As #mic.sca's answer says, triggers are a poor/tricky way to implement rules like this. What you really want is a constraint that can work at table level rather than row level. ANSI SQL would call this an "assertion", but no DBMS vendor has implemented that yet (though it seems that Oracle is seriously considering doing so in a future release).
However, there is a way to simulate such a constraint/assertion using materialized views. I blogged about this way back in 2004 - your requirement is very similar to my example 2 there. Modified for your table this would be:
create materialized view my_table_mv1
refresh complete on commit as
select 1 dummy
from my_table t1, my_table t2
where t1.id = t2.id
and t1.rowid != t2.rowid  -- don't treat a row as overlapping itself
and t1.startdate <= t2.enddate
and t1.enddate >= t2.startdate;
alter table my_table_mv1
add constraint my_table_mv1_chk
check (1=0) deferrable;
This materialized view only contains instances of overlaps, so should always be empty. As soon as an overlap is created, a row is inserted into the materialized view - but immediately violates its check constraint, which can never be satisfied!
Note that this is a deferred constraint, i.e. it will not be checked until commit time.
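For example (assuming the objects above are in place), an overlapping insert succeeds at statement level and the failure only appears when the materialized view refreshes at commit, typically reported as an ORA-12008 refresh error caused by the ORA-02290 check constraint violation:
insert into my_table values (1, to_date('15/02/2018','dd/mm/yyyy'), to_date('20/02/2018','dd/mm/yyyy'));  -- overlaps an existing row, but succeeds
commit;  -- fails here, because the refresh of my_table_mv1 violates the check constraint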
By the way, I don't know why I didn't use ANSI join syntax back in 2004 - maybe I just wasn't using it then. However, there are cases (I think more with outer joins) where materialized views can't be created using ANSI syntax but can be with the equivalent old-style syntax!
The mutating table error occurs because, during the update, the trigger selects from the very table that is being modified.
My advice would be not to use a trigger, and instead do all inserts and updates through stored procedures that check that the dates do not overlap before performing the operation.
To prevent concurrent operations on the same id, you also need a mechanism to serialize the sessions working on that data. You might have a separate parent table with your ids, and every operation on a specific id should do a SELECT ... FOR UPDATE on that id in the parent table before running inserts or updates on my_table, as sketched below.
Triggers might look cool, but they can create maintenance headaches in the long run: they are not very explicit and they apply to every operation on the table (http://www.oracle.com/technetwork/testcontent/o58asktom-101055.html).
By the way, with your trigger, if two users concurrently update two rows with the same id you could end up with overlapping values without the trigger raising any error (though it is very unlikely).
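A minimal sketch of that approach, assuming a parent table my_table_parent(id) exists; the procedure and parent table names are assumptions:
create or replace procedure add_period (
  p_id        my_table.id%type,
  p_startdate my_table.startdate%type,
  p_enddate   my_table.enddate%type
) as
  v_lock_id my_table_parent.id%type;
  v_count   number;
begin
  -- serialise concurrent sessions working on the same id
  select id into v_lock_id
  from my_table_parent
  where id = p_id
  for update;

  -- no other session can now be checking or changing periods for this id
  select count(*) into v_count
  from my_table
  where id = p_id
  and startdate <= p_enddate
  and enddate >= p_startdate;

  if v_count > 0 then
    raise_application_error(-20001, 'Cannot make the data overlapped.');
  end if;

  insert into my_table (id, startdate, enddate)
  values (p_id, p_startdate, p_enddate);
end add_period;
/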

PL/SQL Insert procedure, insert if the row doesn't exist

Below is my procedure. It inserts, but every time I execute the procedure it inserts a duplicate row. I don't want that; I have tried everything and I don't know how to resolve the issue.
My Code :
CREATE OR REPLACE PROCEDURE Insert_Cidades(p_NOME CIDADE.NOME_CIDADE%TYPE)
IS
BEGIN
INSERT INTO CIDADE(COD_CIDADE,NOME_CIDADE) VALUES(seq_id_cidade.NEXTVAL,p_NOME);
END Insert_Cidades;
/
This is in Oracle PL/SQL.
MERGE INTO CIDADE
USING (SELECT p_NOME as NOME FROM DUAL) x
ON (x.NOME = NOME_CIDADE)
WHEN NOT MATCHED THEN
INSERT (COD_CIDADE, NOME_CIDADE)
VALUES (seq_id_cidade.NEXTVAL, p_NOME)
or
INSERT INTO CIDADE
SELECT
seq_id_cidade.NEXTVAL,
p_NOME
FROM
dual
WHERE NOT EXISTS (SELECT 'x' FROM CIDADE WHERE NOME_CIDADE = p_NOME)
Note that the comparison NOME_CIDADE = p_NOME is case sensitive, meaning that you can still insert 'John', 'john', 'JOHN' and 'jOHN'. If you don't want that, change it to something like upper(NOME_CIDADE) = upper(p_NOME) or nlssort(NOME_CIDADE) = nlssort(p_NOME).
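Putting the MERGE back into the original procedure would look something like this (a sketch based purely on the code above):
CREATE OR REPLACE PROCEDURE Insert_Cidades(p_NOME CIDADE.NOME_CIDADE%TYPE)
IS
BEGIN
  MERGE INTO CIDADE c
  USING (SELECT p_NOME AS NOME FROM DUAL) x
  ON (c.NOME_CIDADE = x.NOME)
  WHEN NOT MATCHED THEN
    INSERT (COD_CIDADE, NOME_CIDADE)
    VALUES (seq_id_cidade.NEXTVAL, x.NOME);
END Insert_Cidades;
/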

SQL to insert only if a certain value is NOT already present in the table?

How do I insert a number into the table, but only if the table does not already contain that number?
I am looking for specific SQL code; I'm not really sure how to approach this. I've tried several things and nothing's working.
EDIT
Table looks like this:
PK ID Value
1 4 500
2 9 3
So if I am trying to INSERT (ID, Value) VALUES (4,100) it should not try to do it!
If ID is supposed to be unique, there should be a unique constraint defined on the table. That will throw an error if you try to insert a value that already exists:
ALTER TABLE table_name
ADD CONSTRAINT uk_id UNIQUE( id );
You can catch the error and do whatever you'd like if an attempt is made to insert a duplicate key - anything from ignoring the error, to logging and re-raising the exception, to raising a custom exception:
BEGIN
INSERT INTO table_name( id, value )
VALUES( 4, 100 );
EXCEPTION
WHEN dup_val_on_index
THEN
null;  -- do something: ignore, log, or re-raise
END;
You can also code the INSERT so that it inserts 0 rows (you would still want the unique constraint in place both from a data model standpoint and because it gives the optimizer more information and may make future queries more efficient)
INSERT INTO table_name( id, value )
SELECT 4, 100
FROM dual
WHERE NOT EXISTS(
SELECT 1
FROM table_name
WHERE id = 4 )
Or you could code a MERGE instead so that you update the VALUE column from 500 to 100 rather than inserting a new row.
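That variant, keeping the table_name and columns used above, might look like this sketch:
MERGE INTO table_name t
USING (SELECT 4 id, 100 value FROM dual) src
ON (t.id = src.id)
WHEN MATCHED THEN
  UPDATE SET t.value = src.value
WHEN NOT MATCHED THEN
  INSERT (id, value) VALUES (src.id, src.value);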
Try the MERGE statement:
MERGE INTO tbl USING
(SELECT 4 id, 100 value FROM dual) data
ON (data.id = tbl.id)
WHEN NOT MATCHED THEN
INSERT (id, value) VALUES (data.id, data.value)
INSERT INTO YOUR_TABLE (YOUR_FIELD)
SELECT '1' FROM dual
WHERE NOT EXISTS (SELECT 1 FROM YOUR_TABLE YT WHERE YT.YOUR_FIELD = '1');
Of course, that '1' will be your number or your variable.
You can use INSERT + SELECT to solve this problem.

Oracle: how to UPSERT (update or insert into a table?)

The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if table t has a row exists that has key X:
update t set mystuff... where mykey=X
else
insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source
allows us to use this command with a single row of supplied values. Note that this is not protected against concurrent access.
create or replace
procedure ups(xa number)
as
begin
merge into mergetest m using dual on (a = xa)
when not matched then insert (a,b) values (xa,1)
when matched then update set b = b+1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
A                      B
---------------------- ----------------------
10                     2
20                     1
The DUAL example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"='smith' , "name"='john'
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153, 'smith', 'john' )
However, from a C# perspective this proved to be slower than doing the update, checking whether the number of rows affected was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"):
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent access; as pointed out in Tim Sylvester's comment, they will raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
...
) IS
BEGIN
LOOP
BEGIN
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"='smith' , "name"='john'
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153, 'smith', 'john' );
EXIT; -- success? -> exit loop
EXCEPTION
WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
NULL; -- exception? -> no op, i.e. continue looping
WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
NULL; -- exception? -> no op, i.e. continue looping
END;
END LOOP;
END;
N.B. In transaction mode SERIALIZABLE, which I don't recommend btw, you might run into
ORA-08177: can't serialize access for this transaction exceptions instead.
I like Grommit's answer, except that it requires duplicated values. I found a solution where each value appears only once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample below for years. Notice that it uses sql%notfound rather than a row count.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved version:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think MERGE is better when you have some processing to do that involves taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single-row case, you may consider the first approach, since the syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this,
insert into b_building_property (
select
'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
from dual
)
minus
(
select * from b_building_property where id = 9
)
;
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
  WHEN credit_limit >= 100000 THEN
    INTO rich_customers VALUES (cust_id, cust_credit_limit)
    INTO customers
  ELSE
    INTO customers
SELECT * FROM new_customers;