INSERT ALL - report which insert causes an exception - SQL

I need a way to report which row in an INSERT ALL statement is broken. I need to know exactly which line is invalid and insert it into a special log table. Is it possible to do this?
I tried writing BEFORE and AFTER INSERT triggers on the table, but that does not work once an exception is thrown.
Another idea is to write a procedure which converts the INSERT ALL into single INSERTs and executes them in a loop, catching the exceptions, but I am having trouble implementing that idea.

Yes, you can use the DML error logging clause:
INSERT INTO dw_empl
SELECT employee_id, first_name, last_name, hire_date, salary, department_id
FROM employees
WHERE hire_date > sysdate - 7
LOG ERRORS INTO err_empl ('daily_load') REJECT LIMIT 25
Full details here:
https://docs.oracle.com/cd/B28359_01/server.111/b28310/tables004.htm
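Note that the error-logging table (err_empl above) has to exist before the INSERT runs; it is normally created with DBMS_ERRLOG.CREATE_ERROR_LOG. A minimal sketch, assuming the target table is DW_EMPL:
BEGIN
  -- creates an error-logging table named ERR_EMPL for the DW_EMPL table
  DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'DW_EMPL',
                               err_log_table_name => 'ERR_EMPL');
END;
/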

Here is one more option: bulk-insert the rows with a FORALL statement and the SAVE EXCEPTIONS clause
forall i in v_data_list.first .. v_data_list.last save exceptions
insert into original_table values v_data_list (i);
and then iterate through the saved exceptions in the exception handler:
exception
  when others then
    if sqlcode = -24381 then
      for indx in 1 .. sql%bulk_exceptions.count loop
        Pkg_log_err.log_error(p_Sqlcode => sql%bulk_exceptions(indx).error_code,
                              p_Sqlerrm => sqlerrm(-sql%bulk_exceptions(indx).error_code));
      end loop;
    else
      raise;
    end if;
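Put together, a self-contained sketch of the pattern might look like this (the staging_table source, the original_table target and the Pkg_log_err logging package are assumptions based on the fragments above):
declare
  type t_row_list is table of original_table%rowtype;
  v_data_list t_row_list;
  bulk_errors exception;
  pragma exception_init(bulk_errors, -24381);  -- ORA-24381: error(s) in array DML
begin
  select * bulk collect into v_data_list from staging_table;
  forall i in v_data_list.first .. v_data_list.last save exceptions
    insert into original_table values v_data_list(i);
exception
  when bulk_errors then
    for indx in 1 .. sql%bulk_exceptions.count loop
      -- sql%bulk_exceptions(indx).error_index gives the offending collection index
      Pkg_log_err.log_error(p_Sqlcode => sql%bulk_exceptions(indx).error_code,
                            p_Sqlerrm => sqlerrm(-sql%bulk_exceptions(indx).error_code));
    end loop;
end;
/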


Handling a "too many values" exception in PLSQL

I need to handle the "too many values" exception in a PL/SQL program.
The table is emptest, with four columns: ENO, ENAME, ESAL and DEPTNAME (the structure was shown as an image in the original post).
But I am not able to handle the exception and the program always shows a pre-defined error.
Anybody has any ideas?
I tried handling it with an exception block (the code was posted as an image), but I am not getting the user-defined message.
The issue is that your table has only four columns:
ENO
ENAME
ESAL
DEPTNAME
but you are trying to insert five column values:
insert into emptest values(v_eno, v_ename, v_esal, v_deptname, v_deptno);
Why are you inserting v_deptno when it is not present in the table?
Either alter your table and add one more column, or update the code to insert only four column values.
What you are trying will not work. The PL/SQL block will not compile, because the SQL statement is invalid - it has too many values.
I think you want to trap the error and show your own error message, but you need to use EXECUTE IMMEDIATE to allow the invalid SQL inside PL/SQL.
To trap a specific Oracle error with a custom message, use PRAGMA EXCEPTION_INIT.
Here is an example:
create table testa (col1 NUMBER);
set serveroutput on size 999999
clear screen
DECLARE
  -- unnamed system exception
  too_many_values EXCEPTION;
  PRAGMA EXCEPTION_INIT (too_many_values, -00913);
  c_too_many_values VARCHAR2(512) := 'Dude, too many values !';
BEGIN
  BEGIN
    EXECUTE IMMEDIATE 'INSERT INTO testa VALUES (1,2)';
  EXCEPTION
    WHEN too_many_values THEN
      dbms_output.put_line(c_too_many_values);
  END;
END;
/
Dude, too many values !
PL/SQL procedure successfully completed.
In your table you have only four columns, but you are trying to insert five column values, so the error is raised.
v_deptno is not a column in your table.
Change your query as follows:
Insert Into emptest values(v_eno, v_ename, v_esal, v_deptname);
or add one more column to your table:
ALTER TABLE emptest ADD DEPTNO Number(4);

Multiple `ON CONFLICT` in insert query

I have a table which has columns like id, name, start_date, end_date
and 2 unique constraints
unique(id, name, start_date)
and
unique(id, name, end_date)
Now when writing the insert query for this table, I have something like:
insert into table (id, name, start_date, end_date)
values (1, 'test', 'example-start-date', 'example-end-date')
on conflict (id, name, start_date) set something
on conflict (id, name, end_date) set something
but I am getting errors. Is this not allowed?
thanks
The answer depends on what you want to happen when there is a conflict.
For DO NOTHING, the answer is simply to use a single clause without specifying the columns:
ON CONFLICT DO NOTHING
That can deal with conflicts on multiple unique constraints.
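For example, with the question's table (here called mytable; sample date literals substituted for the placeholders):
-- a single ON CONFLICT DO NOTHING covers a violation of either unique constraint
INSERT INTO mytable (id, name, start_date, end_date)
VALUES (1, 'test', DATE '2024-01-01', DATE '2024-12-31')
ON CONFLICT DO NOTHING;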
For DO UPDATE, there is no straightforward solution. The syntax diagram in the documentation shows that you can only have a single ON CONFLICT clause that determines a single unique index.
You could use procedural code to do it in the old-fashioned way, for example in PL/pgSQL:
DECLARE
  v_constraint text;
BEGIN
  LOOP  -- endless loop until INSERT or UPDATE succeeds
    BEGIN
      INSERT INTO ...;
      EXIT;  -- leave loop if INSERT succeeds
    EXCEPTION
      WHEN unique_violation THEN
        GET STACKED DIAGNOSTICS v_constraint := CONSTRAINT_NAME;
    END;
    CASE v_constraint
      WHEN 'constraint_name_1' THEN
        UPDATE ...;
      WHEN 'constraint_name_2' THEN
        UPDATE ...;
    END CASE;
    EXIT WHEN FOUND;  -- leave loop if UPDATE changed a row
  END LOOP;
END;

Trigger to stop the execution of a SQL query

CREATE OR REPLACE trigger million_trigger
BEFORE INSERT or UPDATE on employee
FOR EACH ROW
WHEN (new.SALARY > 1000000)
DECLARE
  txt EXCEPTION;
BEGIN
  if INSERTING or UPDATING then
    RAISE txt;
  end if;
EXCEPTION
  WHEN txt THEN
    DBMS_OUTPUT.PUT_LINE('SALARY TOO HIGH');
end;
/
Hello, I created a trigger which checks whether the salary in the employee table is greater than 1,000,000. If the salary is greater, the trigger is supposed to stop the execution of an INSERT statement issued from a stored procedure. The trigger was created successfully, but when I insert a record with salary > 1,000,000 nothing happens (the record gets inserted, which is not supposed to happen). Any idea?
You are catching the exception so the trigger doesn't throw an error. Since the trigger doesn't throw an error, the INSERT statement continues and, ultimately, succeeds. If you happen to have enabled serveroutput in your session, you would see the "Salary too high" message but you should never depend on data written to the dbms_output buffer being read by anyone.
If you want to stop the execution of the INSERT statement, you would need to remove your exception handler. Most likely, you would also want to change how you are raising the exception:
IF( :new.salary > 1000000 )
THEN
RAISE_APPLICATION_ERROR( -20001, 'Salary too high' );
END IF;
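Putting it together, a corrected version of the trigger might look like this (a sketch based on the trigger above; because the error is not caught, it propagates and makes the triggering INSERT or UPDATE fail):
CREATE OR REPLACE TRIGGER million_trigger
BEFORE INSERT OR UPDATE ON employee
FOR EACH ROW
BEGIN
  IF :new.salary > 1000000 THEN
    -- no exception handler here, so the error propagates and the statement is rolled back
    RAISE_APPLICATION_ERROR(-20001, 'Salary too high');
  END IF;
END;
/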

Continuing Inserts in Oracle when exception is raised

I'm working on migrating data from a legacy system into our new app (running on Oracle Database 10gR2). As part of the migration, I'm working on a script which inserts the data into the tables used by the app.
The number of rows of data that are imported runs into thousands, and the source data is not clean (unexpected nulls in NOT NULL columns, etc). So while inserting data through the scripts, whenever such an exception occurs, the script ends abruptly, and the whole transaction is rolled back.
Is there a way, by which I can continue inserts of data for which the rows are clean?
Using NVL() or COALESCE() is not an option, as I'd like to log the rows causing the errors so that the data can be corrected for the next pass.
EDIT: My current procedure has an exception handler, and I am logging the first row which causes the error. Would it be possible for the inserts to continue without terminating? Right now, on the first handled exception the procedure terminates execution.
If the data volumes were higher, row-by-row processing in PL/SQL would probably be too slow.
In those circumstances, you can use DML error logging, described here
CREATE TABLE raises (emp_id NUMBER, sal NUMBER
CONSTRAINT check_sal CHECK(sal > 8000));
EXECUTE DBMS_ERRLOG.CREATE_ERROR_LOG('raises', 'errlog');
INSERT INTO raises
SELECT employee_id, salary*1.1 FROM employees
WHERE commission_pct > .2
LOG ERRORS INTO errlog ('my_bad') REJECT LIMIT 10;
SELECT ORA_ERR_MESG$, ORA_ERR_TAG$, emp_id, sal FROM errlog;
ORA_ERR_MESG$                        ORA_ERR_TAG$         EMP_ID  SAL
-----------------------------------  -------------------  ------  -------
ORA-02290: check constraint          my_bad                  161     7700
(HR.SYS_C004266) violated
Using PL/SQL you can perform each insert in its own transaction (COMMIT after each) and log or ignore errors with an exception handler that keeps going.
Try this:
for r_row in c_legacy_data loop
begin
insert into some_table(a, b, c, ...)
values (r_row.a, r_row.b, r_row.c, ...);
exception
when others then
null; /* or some extra logging */
end;
end loop;
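Since the question also asks for the offending rows to be logged for a later pass, the handler can write them to an error table instead of swallowing them. A sketch, assuming a hypothetical migration_errors log table and that r_row.a identifies the legacy row:
for r_row in c_legacy_data loop
  begin
    insert into some_table(a, b, c)
    values (r_row.a, r_row.b, r_row.c);
  exception
    when others then
      -- record the legacy key and the Oracle error for the next pass
      insert into migration_errors (legacy_key, err_msg, logged_at)
      values (r_row.a, sqlerrm, sysdate);
  end;
end loop;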
DECLARE
  CURSOR c_rows IS SELECT ... ;        -- your source rows
BEGIN
  FOR r IN c_rows LOOP
    BEGIN                              -- sub-block begins
      SAVEPOINT startTransaction;      -- mark a savepoint
      -- do whatever you have to do here
      COMMIT;
    EXCEPTION
      WHEN OTHERS THEN
        ROLLBACK TO startTransaction;  -- undo the changes for this row
    END;                               -- sub-block ends
  END LOOP;
END;
If you use sqlldr you can specify to continue loading data, and all the 'bad' data will be skipped and logged in a separate file.
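As a sketch (the control file, column names and file names are assumptions), a SQL*Loader run that keeps going past bad rows could look like this:
-- legacy.ctl
LOAD DATA
INFILE 'legacy_data.csv'
APPEND INTO TABLE some_table
FIELDS TERMINATED BY ','
(a, b, c)
Run it with the error limit raised so the load continues; rejected rows are written to the bad file and reported in the log:
sqlldr userid=scott/tiger control=legacy.ctl log=legacy.log bad=legacy_data.bad errors=100000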

How bad is ignoring Oracle DUP_VAL_ON_INDEX exception?

I have a table where I'm recording if a user has viewed an object at least once, hence:
HasViewed
ObjectID number (FK to Object table)
UserId number (FK to Users table)
Both fields are NOT NULL and together form the Primary Key.
My question is, since I don't care how many times someone has viewed an object (after the first), I have two options for handling inserts.
Do a SELECT count(*) ... and if no records are found, insert a new record.
Always just insert a record, and if it throws a DUP_VAL_ON_INDEX exception (indicating that there already was such a record), just ignore it.
What's the downside of choosing the second option?
UPDATE:
I guess the best way to put it is : "Is the overhead caused by the exception worse than the overhead caused by the initial select?"
I would normally just insert and trap the DUP_VAL_ON_INDEX exception, as this is the simplest to code. This is more efficient than checking for existence before inserting. I don't consider doing this a "bad smell" (horrible phrase!) because the exception we handle is raised by Oracle - it's not like raising your own exceptions as a flow-control mechanism.
Thanks to Igor's comment I have now run two different benchmarks on this: (1) where all insert attempts except the first are duplicates, and (2) where no inserts are duplicates. Reality will lie somewhere between the two cases.
Note: tests performed on Oracle 10.2.0.3.0.
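For reference, the benchmarks below assume a table like this (reconstructed from the description in the question; exact column types are an assumption):
CREATE TABLE hasviewed (
  objectid NUMBER NOT NULL,   -- FK to Object table
  userid   NUMBER NOT NULL,   -- FK to Users table
  CONSTRAINT hasviewed_pk PRIMARY KEY (objectid, userid)
);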
Case 1: Mostly duplicates
It seems that the most efficient approach (by a significant factor) is to check for existence WHILE inserting:
prompt 1) Check DUP_VAL_ON_INDEX
begin
for i in 1..1000 loop
begin
insert into hasviewed values(7782,20);
exception
when dup_val_on_index then
null;
end;
end loop;
rollback;
end;
/
prompt 2) Test if row exists before inserting
declare
dummy integer;
begin
for i in 1..1000 loop
select count(*) into dummy
from hasviewed
where objectid=7782 and userid=20;
if dummy = 0 then
insert into hasviewed values(7782,20);
end if;
end loop;
rollback;
end;
/
prompt 3) Test if row exists while inserting
begin
for i in 1..1000 loop
insert into hasviewed
select 7782,20 from dual
where not exists (select null
from hasviewed
where objectid=7782 and userid=20);
end loop;
rollback;
end;
/
Results (after running once to avoid parsing overheads):
1) Check DUP_VAL_ON_INDEX
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.54
2) Test if row exists before inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.59
3) Test if row exists while inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.20
Case 2: no duplicates
prompt 1) Check DUP_VAL_ON_INDEX
begin
for i in 1..1000 loop
begin
insert into hasviewed values(7782,i);
exception
when dup_val_on_index then
null;
end;
end loop;
rollback;
end;
/
prompt 2) Test if row exists before inserting
declare
dummy integer;
begin
for i in 1..1000 loop
select count(*) into dummy
from hasviewed
where objectid=7782 and userid=i;
if dummy = 0 then
insert into hasviewed values(7782,i);
end if;
end loop;
rollback;
end;
/
prompt 3) Test if row exists while inserting
begin
for i in 1..1000 loop
insert into hasviewed
select 7782,i from dual
where not exists (select null
from hasviewed
where objectid=7782 and userid=i);
end loop;
rollback;
end;
/
Results:
1) Check DUP_VAL_ON_INDEX
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.15
2) Test if row exists before inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.76
3) Test if row exists while inserting
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.71
In this case DUP_VAL_ON_INDEX wins by a mile. Note that the "select before insert" approach is the slowest in both cases.
So it appears that you should choose option 1 or 3 according to the relative likelihood of inserts being or not being duplicates.
I don't think there is a downside to your second option. I think it's a perfectly valid use of the named exception, plus it avoids the lookup overhead.
Try this?
SELECT 1
FROM TABLE
WHERE OBJECTID = 'PRON_172.JPG' AND
USERID='JCURRAN'
It should return 1 if such a row exists, and no rows otherwise.
In your case, it looks safe to ignore, but for performance one should avoid exceptions on the common path. A question to ask is: how common will the exceptions be?
Few enough to ignore, or so many that another method should be used?
IMHO it is best to go with option 2. Other than what has already been said, you should consider thread safety. If you go with option 1 and multiple threads are executing your PL/SQL block, two or more of them may run the SELECT at the same time while there is no record yet; they will all proceed to insert, and all but one will hit a unique constraint error.
Usually, exception handling is slower; however, if it happens only seldom, you avoid the overhead of the query.
I think it mainly depends on the frequency of the exception, but if performance is important, I would suggest some benchmarking with both approaches.
Generally speaking, treating common events as exceptions is a bad smell; for this reason you could also look at it from another point of view.
If it is an exception, then it should be treated as an exception - and your approach is correct.
If it is a common event, then you should try to explicitly handle it - and then checking if the record is already inserted.