I have to insert some data into an Oracle DB without first checking whether it already exists.
Is there any way, within the transaction on Oracle, to catch the exception inside the query and handle it so that no exception is returned?
It would be perfect to have something in MySQL's style, like
insert .... on duplicate key a=a
You can use MERGE. The syntax is a bit different from a regular insert, though:
MERGE INTO test USING (
SELECT 1 AS id, 'Test#1' AS value FROM DUAL -- your row to insert here
) t ON (test.id = t.id) -- duplicate check
WHEN NOT MATCHED THEN
INSERT (id, value) VALUES (t.id, t.value); -- insert if no duplicate
An SQLfiddle to test with.
If you can use PL/SQL, and you have a unique index on the columns where you don't want any duplicates, then you can catch the exception and ignore it:
begin
insert into your_table (your_col) values (your_value);
exception
when dup_val_on_index then null;
end;
Since 11g there is the ignore_row_on_dupkey_index hint, which ignores unique-constraint exceptions and lets the statement go on with the next row if there is any (see Link). The exception is not logged. The hint takes two arguments: the table name and the index name.
INSERT /*+ ignore_row_on_dupkey_index(my_table, my_table_idx) */
INTO my_table(id,name,phone)
VALUES (24,'Joe','+49 19450704');
Related
Is there any way to write an SQL input file for sqlite that would somehow "throw" an error, e.g. exit the transaction with a rollback, if a condition isn't met?
I have a script that is supposed to do something, but only if there is a certain row in one table. If it's not there, the execution of the script might have fatal results and corrupt the db.
The script is only started on demand right now, but I would prefer to add a fail-safe which would prevent its execution in case there is some issue.
Basically what I need is something like
/* IF */ SELECT value FROM meta WHERE key = 'version' /* != hardcoded_version_string THROW SOME EXCEPTION */
Is there any way to accomplish that? In Postgres / Oracle this could be done using PL/SQL, but I am not sure whether sqlite supports any such thing.
Triggers can use the RAISE function to generate errors:
CREATE VIEW test AS SELECT NULL AS value;
CREATE TRIGGER test_insert
INSTEAD OF INSERT ON test
BEGIN
SELECT RAISE(FAIL, 'wrong value')
WHERE NEW.value != 'fixed_value';
END;
INSERT INTO test SELECT 'fixed_value';
INSERT INTO test SELECT 'whatever';
Error: wrong value
Is there any way to write an SQL input file for sqlite that would somehow "throw" an error, e.g. exit the transaction with a rollback, if a condition isn't met?
One workaround may be to create a dummy table and explicitly violate a NOT NULL constraint:
CREATE TABLE meta("key" VARCHAR(100));
INSERT INTO meta("key") VALUES ('version');
CREATE TEMPORARY TABLE dummy(col INT NOT NULL);
Transaction:
BEGIN TRANSACTION;
INSERT INTO dummy(col)
SELECT NULL -- explicit insert of NULL
FROM meta
WHERE "key" = 'version';
-- Error: NOT NULL constraint failed: dummy.col
-- rest code
INSERT INTO meta("key")
VALUES ('val1');
INSERT INTO meta("key")
VALUES ('val2');
-- ...
COMMIT;
SqlFiddleDemo
Keep in mind that SQLite is not a procedural language and this solution is a bit ugly.
I was wondering what would be the preferred technique in Oracle to copy multiple records into a database while ignoring duplicate values on a certain index. The values are stated explicitly in the statements and don't come from another table:
INSERT INTO EXAMPLE (A, B, C, D) VALUES (null,'example1','example2',EXAMPLE_SEQ.nextval);
INSERT INTO EXAMPLE (A, B, C, D) VALUES (null,'example2','example3',EXAMPLE_SEQ.nextval);
INSERT INTO EXAMPLE (A, B, C, D) VALUES (null,'example4','example5',EXAMPLE_SEQ.nextval);
I am currently doing it like this and checking manually, but I need to find a way so that these can be handled as scripts.
If you've decided to stick with INSERTs, you can prevent insertion of duplicate rows by using constraints, whether a primary key or a unique key. If one of the inserts happens to violate a unique constraint, your script will stop and you'll have to roll back all changes made by the previous inserts (unless you have committed every single one of them). To handle that exception you could write a PL/SQL block similar to this:
declare
l_unique_exception exception;
pragma exception_init(l_unique_exception, -1); -- ORA-00001: unique constraint violated
begin
insert into test(id, test_vector)
values(1, 123);
insert into test(id, test_vector)
values(1, 123);
-- ... further inserts
commit;
exception
when l_unique_exception then
null; -- process the exception here (log it, roll back, etc.)
end;
IN ADDITION
If you want to proceed after one of the inserts raises an exception, then the following example might come in handy.
Create an error logging table by invoking the CREATE_ERROR_LOG procedure of the DBMS_ERRLOG package. The first argument is the name of your DML table (TEST in this case), the second is the name of the error log table to create:
BEGIN
DBMS_ERRLOG.CREATE_ERROR_LOG('TEST', 'TB_ERRORS');
END;
/
Add a LOG ERRORS INTO clause to each insert. Here is an example:
declare
begin
insert into test(id, col1)
values(1, 123)
log errors into tb_errors('simple expression') reject limit unlimited;
insert into test(id, col1)
values(1, 123)
log errors into tb_errors('simple expression') reject limit unlimited;
insert into test(id, col1)
values(1, 123)
log errors into tb_errors('simple expression') reject limit unlimited;
commit;
end;
After your script has completed, you can query the error logging table (tb_errors in this case) to see what went wrong.
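A hedged sketch of such a query; an error log table created by DBMS_ERRLOG exposes the ORA_ERR_* metadata columns plus copies of the DML table's columns (id and col1 in the TEST example):

SELECT ora_err_number$, ora_err_mesg$, ora_err_tag$, id, col1
FROM tb_errors;

ORA_ERR_TAG$ holds the tag passed in the LOG ERRORS clause ('simple expression' above), which is handy for telling the failing statements apart.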
You should look at the MERGE syntax.
http://en.wikipedia.org/wiki/Merge_(SQL)
merge into example target
using (select 1 as id, 'a' as val from dual) source
on (source.id = target.id
and source.val = target.val)
when not matched then
insert (id, val) values (source.id, source.val);
I suggest you use the LOG ERRORS clause if your goal is to give incorrect data some additional treatment. See http://www.oracle-base.com/articles/10g/dml-error-logging-10gr2.php - there is a good example there.
I'm trying to insert 40 rows using an INSERT ALL INTO and I'm not certain how to insert the surrogate key. Here's what I have:
BEGIN
INSERT ALL
INTO question(question_id)
VALUES (question_seq.nextval)
END
Now if I add another INTO VALUES then I get a unique constraint violation.
BEGIN
INSERT ALL
INTO question(question_id)
VALUES (question_seq.nextval)
INTO question(question_id)
VALUES (question_seq.nextval)
END
How can I get the sequence's nextval to advance for each INTO ... VALUES so that I can avoid the unique constraint violation? I assumed that nextval would advance automatically.
UPDATE: I don't know if this is the best way to handle this but here's the solution I came up with:
first I created a function that returns a value
then I called that function in the id field of the VALUES clause
create or replace
FUNCTION GET_QUESTION_ID RETURN NUMBER AS
num NUMBER;
BEGIN
SELECT UHCL_QUESTIONS_SEQ.nextval
INTO num
FROM dual;
return num;
END GET_QUESTION_ID;
INSERT ALL
INTO question(question_id)
VALUES (GET_QUESTION_ID())
INTO question(question_id)
VALUES (GET_QUESTION_ID())
Being from a SQL Server background, I've always created a trigger on the table to basically emulate IDENTITY functionality. Once the trigger is on, the SK is automatically generated from the sequence just like identity and you don't have to worry about it.
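A minimal sketch of such a trigger, assuming the question table and question_seq sequence from the question (the trigger name is made up):

CREATE OR REPLACE TRIGGER question_bi
BEFORE INSERT ON question
FOR EACH ROW
BEGIN
  -- emulate IDENTITY: fill the surrogate key from the sequence when it is not supplied
  IF :NEW.question_id IS NULL THEN
    SELECT question_seq.nextval INTO :NEW.question_id FROM dual;
  END IF;
END;
/

With the trigger in place, an INSERT into question no longer needs to mention question_id at all.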
You can use something like this:
insert into question(question_id)
select question_seq.nextval from
(
select level from dual connect by level <= 40
);
It's not a very convenient format, though, especially if there are other columns you want to add. You'd probably need to create another UNION ALL query and join it by the LEVEL or ROWNUM.
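A hedged sketch of that idea, assuming a hypothetical question_text column (the literal values are made up); the sequence is referenced only in the outermost select list, where it is allowed:

INSERT INTO question (question_id, question_text)
SELECT question_seq.nextval, v.question_text
FROM (SELECT level AS n FROM dual CONNECT BY level <= 2) g
JOIN (SELECT ROWNUM AS n, question_text
        FROM (SELECT 'First question'  AS question_text FROM dual UNION ALL
              SELECT 'Second question' AS question_text FROM dual)) v
  ON (g.n = v.n);

The level generator and the UNION ALL of values are paired up on their row position, so extending it to more rows means raising the LEVEL limit and adding more UNION ALL branches.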
My first thought was to do something like this:
insert into question(question_id)
select question_seq.nextval value from dual
union all
select question_seq.nextval from dual;
But it generates ORA-02287: sequence number not allowed here, due to the restrictions on sequence values.
By the way, are you sure your INSERT ALL works without a subquery? I get the error ORA-00928: missing SELECT keyword, and the syntax diagram in the 11.2 manual implies there must be a subquery.
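For reference, a minimal INSERT ALL that satisfies the syntax; this sketch uses literal keys, because sequences behave awkwardly in multitable inserts (NEXTVAL is not re-evaluated per INTO clause, which is exactly the unique-constraint problem described above):

INSERT ALL
  INTO question (question_id) VALUES (41)
  INTO question (question_id) VALUES (42)
SELECT * FROM dual;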
In an Oracle table (e.g. MYTABLE, with a numeric sequenced field as primary key), I have to insert several thousand rows, but some of them are supposed to already exist in the table.
Naturally, I should try to use MERGE but I need, as well, to retrieve all created (when inserting) and existing (when updating) primary keys.
As well, it should be as fast as possible.
Is the following attempt (pseudo code) the only way to go? Thanks.
keys_list = empty array
for each row to merge
do query 'SELECT PK_MYTABLE FROM MYTABLE WHERE PK_MYTABLE = '+row.pk_mytable
==> retrieve key
if found then:
add key to keys_list
else:
do query 'INSERT INTO MYTABLE (PK_MYTABLE, ...) VALUES (SEQ_MYTABLE.NEXTVAL, ...)'
do query 'SELECT SEQ_MYTABLE.CURRVAL FROM DUAL' ==> retrieve key
add key to keys_list
Add a MODIFICATION_DATE column to the table.
Grab and save the sysdate.
When you merge, write that saved sysdate into the column in both the update and the insert branches.
When the merge is complete, select the rows where MODIFICATION_DATE equals the saved date and you have the set you are interested in.
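A rough sketch of that approach, assuming MYTABLE and SEQ_MYTABLE from the question, a new MODIFICATION_DATE column, and a hypothetical SOURCE_ROWS staging table holding the incoming data (here the incoming rows carry candidate primary keys; adapt the ON clause if you match on a natural key instead):

declare
  v_run_date date := sysdate;  -- grab and save the sysdate once for the whole run
begin
  merge into mytable mt
  using (select pk_mytable, some_col from source_rows) src
  on (mt.pk_mytable = src.pk_mytable)
  when matched then update
    set mt.some_col = src.some_col,
        mt.modification_date = v_run_date
  when not matched then insert (pk_mytable, some_col, modification_date)
    values (src.pk_mytable, src.some_col, v_run_date);

  -- every key touched by this run, whether inserted or updated
  for r in (select pk_mytable from mytable where modification_date = v_run_date) loop
    dbms_output.put_line(r.pk_mytable);
  end loop;
end;
/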
Why can't you use a MERGE statement for this? This is exactly what a MERGE is for. Here is a rough idea of how it would look...
merge into mytable mt
using
(
select key_field, value_field from sourcetable
) st
on
( mt.key_field = st.key_field )
when matched then update
set mt.value_field = st.value_field
when not matched then insert
( key_field, value_field )
values
( st.key_field, st.value_field )
;
Using a MERGE statement is fast because it is a single statement and the Oracle optimizer can use indexes and choose a better execution plan than iterating through a cursor using PL/SQL.
If the keys are being generated from a sequence, then the normal way to get the key generated by that insert is to use the returning clause:
declare
v_insert_seq integer;
begin
insert into t1 (pk, c1)
values (myseq.nextval, 'value') returning pk into v_insert_seq;
end;
/
However, as best as I can tell, the merge statement doesn't support that returning feature.
Depending on the source of your new rows, there are different ways you could do this. If you are inserting one row at a time, then the approach above will work pretty well.
To detect the duplicate records, just catch the exception when you are inserting (when dup_val_on_index) and then handle it with an update, as sketched below.
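Here is one hedged way that single-row pattern could look, combined with the RETURNING clause so the key comes back in both cases (natural_key and some_col are hypothetical column names, and the duplicate is assumed to be detected by a unique index on natural_key):

declare
  v_key mytable.pk_mytable%type;
begin
  begin
    insert into mytable (pk_mytable, natural_key, some_col)
    values (seq_mytable.nextval, 'ABC', 'value')
    returning pk_mytable into v_key;   -- key of the newly inserted row
  exception
    when dup_val_on_index then
      update mytable
      set some_col = 'value'
      where natural_key = 'ABC'
      returning pk_mytable into v_key; -- key of the already existing row
  end;
  dbms_output.put_line('key = ' || v_key);
end;
/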
If your source of rows is another table, you probably want to look at bulk inserts, and allowing Oracle to return you an array of new PK values. I tried this, but couldn't get it working, so perhaps it's not supported (or I'm missing something today - it gives a syntax error):
declare
type t_type is table of t1.pk%type;
v_insert_seqs t_type;
begin
insert into t1 (pk, c1)
select level newpk, 'value' c1value
from dual
connect by level <= 10 returning pk bulk collect into v_insert_seqs;
exception
when dup_val_on_index then
raise;
end;
/
The next best thing is to select the rows into arrays and then use bulk binds with the returning clause to capture the new PK IDs, and also use SAVE EXCEPTIONS to catch all the rows that failed to insert. Then you can process any of the failed inserts afterwards:
set serveroutput on
declare
type t_pk is table of t1.pk%type;
type t_c1 is table of t1.c1%type;
v_pks t_pk;
v_c1s t_c1;
v_new_pks t_pk;
ex_dml_errors EXCEPTION;
PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);
begin
-- get the batch of rows you want to insert
select level newpk, 'value' c1
bulk collect into v_pks, v_c1s
from dual connect by level <= 10;
-- bulk bind insert, saving exceptions and capturing the newly inserted
-- records
forall i in v_pks.first .. v_pks.last save exceptions
insert into t1 (pk, c1)
values (v_pks(i), v_c1s(i)) returning pk bulk collect into v_new_pks;
exception
-- Process the exceptions
when ex_dml_errors then
for i in 1..SQL%BULK_EXCEPTIONS.count loop
DBMS_OUTPUT.put_line('Error: ' || i ||
' Array Index: ' || SQL%BULK_EXCEPTIONS(i).error_index ||
' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
end loop;
end;
/
If you are running Oracle 10 or better, you might be able to do much the same thing nearly for free: issue a commit before the merge to update the SCN, then after the merge use ORA_ROWSCN to detect which rows have changed.
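A hedged sketch of that idea; note that ORA_ROWSCN is only tracked per row if the table was created with ROWDEPENDENCIES, otherwise it is per block and may over-report changed rows:

declare
  v_scn number;
begin
  commit;  -- make sure earlier changes carry an older SCN
  v_scn := dbms_flashback.get_system_change_number;  -- needs EXECUTE on DBMS_FLASHBACK

  -- ... run the MERGE here ...

  -- rows written by the merge now carry a newer ORA_ROWSCN
  for r in (select pk_mytable from mytable where ora_rowscn > v_scn) loop
    dbms_output.put_line(r.pk_mytable);
  end loop;
end;
/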
The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if table t has a row exists that has key X:
update t set mystuff... where mykey=X
else
insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source allows us to use this command for a single row. Note that this is not protected against concurrent access.
create or replace
procedure ups(xa number)
as
begin
merge into mergetest m using dual on (a = xa)
when not matched then insert (a,b) values (xa,1)
when matched then update set b = b+1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
A                      B
---------------------- ----------------------
10                     2
20                     1
The DUAL example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id" = 2097153 )
WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
VALUES ( 2097153, 'smith', 'john' )
However, from a C# perspective this proved to be slower than doing the update, checking whether the rows affected was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"):
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent access, as pointed out in Tim Sylvester's comment, and they will raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
...
) IS
BEGIN
LOOP
BEGIN
MERGE INTO Employee USING dual ON ( "id" = 2097153 )
WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
VALUES ( 2097153, 'smith', 'john' );
EXIT; -- success? -> exit loop
EXCEPTION
WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
NULL; -- exception? -> no op, i.e. continue looping
WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
NULL; -- exception? -> no op, i.e. continue looping
END;
END LOOP;
END;
N.B. In transaction mode SERIALIZABLE, which I don't recommend btw, you might run into
ORA-08177: can't serialize access for this transaction exceptions instead.
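If you do run under SERIALIZABLE, that error can be caught and retried in the same kind of loop; a sketch (ORA-08177 has no predefined PL/SQL exception name, so it is bound by hand, and the failed transaction is rolled back before retrying):

DECLARE
  e_cant_serialize EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_cant_serialize, -8177);
BEGIN
  LOOP
    BEGIN
      MERGE INTO Employee USING dual ON ( "id" = 2097153 )
      WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
      WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
      VALUES ( 2097153, 'smith', 'john' );
      EXIT;  -- success -> leave the loop
    EXCEPTION
      WHEN e_cant_serialize THEN
        ROLLBACK;  -- a serialization failure requires restarting the transaction
    END;
  END LOOP;
END;
/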
I like Grommit's answer, except that it requires duplicated values. I found a solution where they may appear once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice sql%notfound rather than sql%rowcount.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the newer and improved approach:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think merge is better when you do have some processing to be done that means taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single row case, you may consider the first case since the syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this:
insert into b_building_property (
select
'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
from dual
)
minus
(
select * from b_building_property where id = 9
)
;
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
  WHEN credit_limit >= 100000 THEN
    INTO rich_customers VALUES (cust_id, cust_credit_limit)
    INTO customers
  ELSE
    INTO customers
SELECT * FROM new_customers;