Continuing inserts in Oracle when an exception is raised - sql

I'm working on migrating data from a legacy system into our new app (running on Oracle Database 10gR2). As part of the migration, I'm writing a script which inserts the data into the tables used by the app.
The number of rows of data to import runs into the thousands, and the source data is not clean (unexpected nulls in NOT NULL columns, etc.). So when the script hits such a row, an exception is raised, the script ends abruptly, and the whole transaction is rolled back.
Is there a way to continue inserting the rows that are clean?
Using NVL() or COALESCE() is not an option, as I'd like to log the offending rows so the data can be corrected for the next pass.
EDIT: My current procedure has an exception handler, and I am logging the first row that causes the error. Would it be possible for the inserts to continue, because right now the procedure terminates execution on the first handled exception?

If the data volumes were higher, row-by-row processing in PL/SQL would probably be too slow. In those circumstances, you can use DML error logging, as described here:
CREATE TABLE raises (emp_id NUMBER, sal NUMBER
  CONSTRAINT check_sal CHECK(sal > 8000));

EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');

INSERT INTO raises
  SELECT employee_id, salary*1.1 FROM employees
  WHERE commission_pct > .2
  LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;

SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;
ORA_ERR_MESG$                    ORA_ERR_TAG$          EMP_ID  SAL
-------------------------------  --------------------  ------  -------
ORA-02290: check constraint      my_bad                161     7700
(HR.SYS_C004266) violated

Using PL/SQL you can perform each insert in its own transaction (COMMIT after each one) and log or ignore errors with an exception handler that keeps the loop going.

Try this:
for r_row in c_legacy_data loop
  begin
    insert into some_table(a, b, c, ...)
    values (r_row.a, r_row.b, r_row.c, ...);
  exception
    when others then
      null; /* or some extra logging */
  end;
end loop;
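Since the original question wants to log the failing rows for correction, the handler can write to a log table instead of doing nothing. Here is a minimal sketch along the same lines; migration_errors is a hypothetical table you would create first, and note that SQLERRM must be captured into a variable before it can be used inside a SQL statement:
-- assumes: create table migration_errors (src_key varchar2(100), err_msg varchar2(4000), logged_at date);
for r_row in c_legacy_data loop
  begin
    insert into some_table(a, b, c)
    values (r_row.a, r_row.b, r_row.c);
  exception
    when others then
      declare
        v_err varchar2(4000) := sqlerrm;  -- capture the error text; SQLERRM cannot appear directly in SQL
      begin
        -- log the source key and the Oracle error, then carry on with the next row
        insert into migration_errors values (r_row.a, v_err, sysdate);
      end;
  end;
end loop;
commit;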

DECLARE
  cursor c_legacy is select * from legacy_table;  -- legacy_table is a placeholder for your source
BEGIN
  for r_row in c_legacy loop
    BEGIN                             -- sub-block begins
      SAVEPOINT startTransaction;     -- mark a savepoint
      -- do whatever you have to do here
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK TO startTransaction; -- undo the changes for this row only
    END;                              -- sub-block ends
  end loop;
END;
/

If you use SQL*Loader (sqlldr) you can tell it to continue loading past errors; all the 'bad' rows will be skipped and logged in a separate file.
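For illustration, a sketch of such a load; the control file contents and the file/table names here are assumptions, while ERRORS (how many rejects to tolerate before aborting) and BAD (the file that collects rejected rows) are standard sqlldr parameters:
-- load_legacy.ctl (hypothetical control file)
LOAD DATA
INFILE 'legacy.dat'
APPEND INTO TABLE some_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(a, b, c)

Run the load with a high error tolerance; rejected rows land in legacy.bad for a later pass:
sqlldr userid=app_user control=load_legacy.ctl log=load_legacy.log bad=legacy.bad errors=100000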

Related

Why isn't the code before raise_application_error executed inside a trigger?

If I create this trigger, the error is raised when DROP or TRUNCATE is used on tables, but nothing is inserted into logTable; if I delete the RAISE_APPLICATION_ERROR line, the values are inserted into logTable, but the drop/truncate is executed too. Why? How can I prevent DROP/TRUNCATE on the schema? (If I use ON SCHEMA instead of ON DATABASE, the trigger is fired only if the owner of the schema is dropping/truncating something.)
CREATE OR REPLACE TRIGGER trigger_name
BEFORE DROP OR TRUNCATE ON DATABASE
DECLARE
  username varchar2(100);
BEGIN
  IF ora_dict_obj_owner = 'MySchema' THEN
    select user INTO username from dual;
    INSERT INTO logTable VALUES(username, SYSDATE);
    RAISE_APPLICATION_ERROR(-20001, 'ERROR, YOU CAN NOT DELETE THIS!!');
  END IF;
END;
/
According to the documentation:
Statement-Level Atomicity
Oracle Database supports statement-level atomicity, which means that a SQL statement is an atomic unit of work
and either completely succeeds or completely fails.
A successful statement is different from a committed transaction. A
single SQL statement executes successfully if the database parses and
runs it without error as an atomic unit, as when all rows are changed
in a multirow update.
If a SQL statement causes an error during execution, then it is not
successful and so all effects of the statement are rolled back. This
operation is a statement-level rollback.
The trigger body runs as part of the triggering SQL statement, which is atomic: if you raise an error inside the trigger, the whole statement fails and Oracle rolls back all changes it made, including the trigger's INSERT into logTable.
But you can create a procedure with the AUTONOMOUS_TRANSACTION pragma to bypass this behaviour, in this way:
CREATE TABLE logtable(
  username varchar2(200),
  log_date date
);

CREATE OR REPLACE PROCEDURE log_message( username varchar2 ) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO logtable( username, log_date ) VALUES ( username, sysdate );
  COMMIT;
END;
/

CREATE OR REPLACE TRIGGER trigger_name
BEFORE DROP OR TRUNCATE ON DATABASE
DECLARE
  username varchar2(100);
BEGIN
  IF ora_dict_obj_owner = 'TEST' THEN
    log_message( user );
    RAISE_APPLICATION_ERROR(-20001, 'ERROR, YOU CAN NOT DELETE THIS!!');
  END IF;
END;
/
And now:
drop table table1;
ORA-00604: error occurred at recursive SQL level 1
ORA-20001: ERROR, YOU CAN NOT DELETE THIS!!
ORA-06512: at line 6
00604. 00000 - "error occurred at recursive SQL level %s"
*Cause: An error occurred while processing a recursive SQL statement
(a statement applying to internal dictionary tables).
*Action: If the situation described in the next error on the stack
can be corrected, do so; otherwise contact Oracle Support.
select * from logtable;
USERNAME  LOG_DATE
--------  -------------------
TEST      2018-04-27 00:16:34

Trigger to stop the execution of a SQL query

CREATE OR REPLACE TRIGGER million_trigger
BEFORE INSERT OR UPDATE ON employee
FOR EACH ROW
WHEN (new.SALARY > 1000000)
DECLARE
  txt EXCEPTION;
BEGIN
  IF INSERTING OR UPDATING THEN
    RAISE txt;
  END IF;
EXCEPTION
  WHEN txt THEN
    DBMS_OUTPUT.PUT_LINE('SALARY TOO HIGH');
END;
/
Hello, I created a trigger which checks if the salary in the employee table is greater than 1,000,000. If the salary is greater, the trigger is supposed to stop the execution of an INSERT statement from a stored procedure. The trigger was created successfully, but when I insert a record with salary > 1,000,000, nothing happens (the record gets inserted, which is not supposed to happen). Any idea?
You are catching the exception, so the trigger doesn't throw an error. Since the trigger doesn't throw an error, the INSERT statement continues and, ultimately, succeeds. If you happen to have enabled serveroutput in your session, you would see the "Salary too high" message, but you should never depend on anyone reading data written to the dbms_output buffer.
If you want to stop the execution of the INSERT statement, you need to remove your exception handler. Most likely, you would also want to change how you raise the exception:
IF ( :new.salary > 1000000 )
THEN
  RAISE_APPLICATION_ERROR( -20001, 'Salary too high' );
END IF;
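Putting it together, the whole trigger would reduce to something like this sketch, with no local exception handler, so the error propagates and aborts the triggering statement:
CREATE OR REPLACE TRIGGER million_trigger
BEFORE INSERT OR UPDATE ON employee
FOR EACH ROW
BEGIN
  IF ( :new.salary > 1000000 )
  THEN
    RAISE_APPLICATION_ERROR( -20001, 'Salary too high' );
  END IF;
END;
/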

Will moving a table/partition to a different tablespace interrupt queries accessing said table/partition?

As the title says: will moving (without renaming) a table/partition while it's being accessed have negative consequences on any queries accessing it?
For example, say there's a long-running SELECT COUNT(*) FROM some_table.
If I were to ALTER TABLE some_table MOVE TABLESPACE some_other_tablespace, would the SELECT fail?
Would it complete, but have incorrect results? Maybe something else entirely?
The only info I could find was that moving the table to a different tablespace requires rebuilding the indexes under certain circumstances, but nothing mentioned what happens to any active queries.
It might fail with ORA-08103: object no longer exists.
In Oracle, readers and writers do not block each other, which means DML and queries will not interfere with each other, excluding a few odd cases like running out of UNDO space. But a move to another tablespace, like any ALTER or other DDL statement, is not a normal write. The multiversion concurrency control model breaks down when you run DDL, at least for the objects involved, and weird things start to happen.
Testing a large move is difficult, but you can reproduce these errors by looping through a lot of small alters and queries. In case you think this is only a theoretical issue: I have seen these errors occur in real life, on a production database.
Warning: the loops below are infinite, since I can't predict how long it will take to reproduce the error. In my experience it usually takes only tens of seconds.
--Create sample table.
drop table test1 purge;
create table test1(a number, b number)
partition by list(a) (partition p1 values(1), partition p2 values(2))
nologging tablespace users;

--Session 1
begin
  loop
    execute immediate '
      insert /*+ append */ into test1 select mod(level,2)+1, level
      from dual connect by level <= 100000';
    commit;
    execute immediate 'alter table test1 move partition p1 tablespace users';
  end loop;
end;
/

--Session 2: Read from moved partition
declare
  v_count number;
begin
  loop
    select count(*) into v_count from test1 where a = 1;
  end loop;
end;
/

--Session 3: Read from unmoved partition
declare
  v_count number;
begin
  loop
    select count(*) into v_count from test1 where a = 2;
  end loop;
end;
/
Session 2 will eventually die with:
ORA-08103: object no longer exists
ORA-06512: at line 6
Session 3 will not fail; it is not querying an altered partition. Each partition has its own segment and is a separate object that can potentially "no longer exist".
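You can see the one-segment-per-partition layout in the data dictionary. A quick sketch against the standard USER_SEGMENTS view, using the test table from above:
select segment_name, partition_name, segment_type, tablespace_name
from user_segments
where segment_name = 'TEST1';
-- expect one TABLE PARTITION row per partition (P1 and P2), each a separate segment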
A partition move WILL cause DML to fail if the DML is accessing that partition.
I just proved it :-/.
I was doing a large delete and tried to move a partition (alter table move partition).
The error I received was:
ORA-12801: error signaled in parallel query server P004
ORA-08103: object no longer exists

How to create a trigger in Oracle which will ensure that no record is deleted if the start date is earlier than the current date?

I have a table Movie:
create table Movie
(
  mv_id number(5),
  mv_name varchar2(30),
  startdate date
);

insert into Movie values(1, 'AVATAR', '8-MAR-2012');
insert into Movie values(2, 'MI3', '20-MAR-2012');
insert into Movie values(3, 'BAD BOYS', '10-FEB-2012');
I want to create a trigger which will ensure that no record is deleted if the start date is earlier than the current date.
My trigger code is --
create or replace trigger trg_1
before delete
on Movie
for each row
when (old.startdate < sysdate)
begin
  raise_application_error(-20001, 'Records can not be deleted');
end;
/
The trigger is created. When I execute this code to delete data --
delete from Movie where mv_id=1;
the trigger fires, but with errors, and I don't know why I am getting this error.
I don't want any error, I want only a message.
This is the error --
delete from Movie where mv_id=1
*
ERROR at line 1:
ORA-20001: Records can not be deleted
ORA-06512: at "OPS$0924769.TRG_1", line 3
ORA-04088: error during execution of trigger 'OPS$0924769.TRG_1'
I want to get rid of this error.
The only way for a trigger on a table to prevent an operation is to throw an exception. You cannot have a trigger that prevents a DELETE and merely prints a message.
You can write a trigger that attempts to display a message and allows the delete to happen:
create or replace trigger trg_1
before delete
on Movie
for each row
when (old.startdate < sysdate)
begin
  dbms_output.put_line('Records can not be deleted');
end;
/
If the client happens to be configured to display the data written to the DBMS_OUTPUT buffer (most clients will not), this will display a message. But it will also allow the delete to succeed, which does not sound like what you want.
Add exception handling around your delete statement.
All you need to do is write the delete statement in its own BEGIN/END block and handle the exception thrown by the trigger there. No row is deleted if it matches the condition; the message denying the deletion is displayed via DBMS_OUTPUT instead.
The code below does what you want:
BEGIN
  DELETE FROM MOVIE WHERE MV_ID = 1;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Records can not be deleted.');
END;
/
You can check the message in the Output window. Below is the output I got in PL/SQL Developer after executing the above block:
Records can not be deleted.
I hope it helps.
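As a refinement, you can catch just the trigger's specific error instead of using WHEN OTHERS, so that unrelated failures still surface. A sketch using PRAGMA EXCEPTION_INIT (a standard PL/SQL pragma) to bind a named exception to the -20001 code the trigger raises:
DECLARE
  e_no_delete EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_no_delete, -20001);
BEGIN
  DELETE FROM Movie WHERE mv_id = 1;
EXCEPTION
  WHEN e_no_delete THEN
    DBMS_OUTPUT.PUT_LINE('Records can not be deleted.');
END;
/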

How bad is ignoring Oracle DUP_VAL_ON_INDEX exception?

I have a table where I'm recording whether a user has viewed an object at least once, hence:
HasViewed
  ObjectID number (FK to the Object table)
  UserId   number (FK to the Users table)
Both fields are NOT NULL and together form the primary key.
My question is: since I don't care how many times someone has viewed an object (after the first), I have two options for handling inserts.
1. Do a SELECT COUNT(*) ... and insert a new record only if none is found.
2. Always just insert a record, and if it throws a DUP_VAL_ON_INDEX exception (indicating that such a record already exists), ignore it.
What's the downside of choosing the second option?
UPDATE:
I guess the best way to put it is: "Is the overhead caused by the exception worse than the overhead caused by the initial select?"
I would normally just insert and trap the DUP_VAL_ON_INDEX exception, as this is the simplest to code. This is more efficient than checking for existence before inserting. I don't consider doing this a "bad smell" (horrible phrase!) because the exception we handle is raised by Oracle - it's not like raising your own exceptions as a flow-control mechanism.
Thanks to Igor's comment I have now run two different benchmarks on this: (1) where all insert attempts except the first are duplicates, and (2) where no inserts are duplicates. Reality will lie somewhere between the two cases.
Note: tests performed on Oracle 10.2.0.3.0.
Case 1: Mostly duplicates
It seems that the most efficient approach (by a significant factor) is to check for existence WHILE inserting:
prompt 1) Check DUP_VAL_ON_INDEX
begin
  for i in 1..1000 loop
    begin
      insert into hasviewed values(7782,20);
    exception
      when dup_val_on_index then
        null;
    end;
  end loop;
  rollback;
end;
/

prompt 2) Test if row exists before inserting
declare
  dummy integer;
begin
  for i in 1..1000 loop
    select count(*) into dummy
      from hasviewed
      where objectid=7782 and userid=20;
    if dummy = 0 then
      insert into hasviewed values(7782,20);
    end if;
  end loop;
  rollback;
end;
/

prompt 3) Test if row exists while inserting
begin
  for i in 1..1000 loop
    insert into hasviewed
      select 7782,20 from dual
      where not exists (select null
                          from hasviewed
                          where objectid=7782 and userid=20);
  end loop;
  rollback;
end;
/
Results (after running once to avoid parsing overheads):
1) Check DUP_VAL_ON_INDEX
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.54
2) Test if row exists before inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.59
3) Test if row exists while inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.20
Case 2: no duplicates
prompt 1) Check DUP_VAL_ON_INDEX
begin
  for i in 1..1000 loop
    begin
      insert into hasviewed values(7782,i);
    exception
      when dup_val_on_index then
        null;
    end;
  end loop;
  rollback;
end;
/

prompt 2) Test if row exists before inserting
declare
  dummy integer;
begin
  for i in 1..1000 loop
    select count(*) into dummy
      from hasviewed
      where objectid=7782 and userid=i;
    if dummy = 0 then
      insert into hasviewed values(7782,i);
    end if;
  end loop;
  rollback;
end;
/

prompt 3) Test if row exists while inserting
begin
  for i in 1..1000 loop
    insert into hasviewed
      select 7782,i from dual
      where not exists (select null
                          from hasviewed
                          where objectid=7782 and userid=i);
  end loop;
  rollback;
end;
/
Results:
1) Check DUP_VAL_ON_INDEX
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.15
2) Test if row exists before inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.76
3) Test if row exists while inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.71
In this case DUP_VAL_ON_INDEX wins by a mile. Note that "select before insert" is the slowest in both cases.
So it appears you should choose option 1 or option 3, depending on how likely the inserts are to be duplicates.
I don't think there is a downside to your second option. I think it's a perfectly valid use of the named exception, plus it avoids the lookup overhead.
Try this?
SELECT 1
FROM hasviewed
WHERE objectid = 'PRON_172.JPG'
  AND userid = 'JCURRAN';
It should return 1 if such a row exists, and no rows otherwise.
In your case it looks safe to ignore the exception, but for performance one should avoid exceptions on the common path. The question to ask is: how common will the exceptions be?
Few enough to ignore, or so many that another method should be used?
IMHO it is best to go with option 2. Beyond what has already been said, you should consider concurrency: if you go with option 1 and multiple sessions execute your PL/SQL block, two or more of them may run the SELECT at the same time while there is no record yet; each will then attempt the INSERT, and all but one will get a unique constraint error.
Usually exception handling is slower; however, if the exception happens only seldom, you avoid the overhead of the query.
I think it mainly depends on the frequency of the exception, but if performance is important, I would suggest benchmarking both approaches.
Generally speaking, treating a common event as an exception is a bad smell; for this reason you could also look at it from another point of view:
If it is truly an exception, then it should be treated as an exception - and your approach is correct.
If it is a common event, then you should handle it explicitly - by checking whether the record has already been inserted.
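If you do want to handle it explicitly in a single statement, one alternative is a sketch using Oracle's standard MERGE; the table and column names come from the question, and the literal values are just examples:
MERGE INTO hasviewed h
USING (SELECT 7782 AS objectid, 20 AS userid FROM dual) src
ON (h.objectid = src.objectid AND h.userid = src.userid)
WHEN NOT MATCHED THEN
  INSERT (objectid, userid) VALUES (src.objectid, src.userid);
This inserts the row only when it is not already present, without raising or trapping any exception.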