How do I avoid this warning? DB2 warning SQLSTATE=02000: No row was found for FETCH, UPDATE, or DELETE; or the result of a query is an empty table

The way we have our error handling set up, this warning sends an email to everyone when it occurs (I can't change this). I really don't care that no rows were found for this job.
How do I either
A) check whether a row is present before trying to delete it, or
B) circumvent/ignore this warning somehow?
Example of the delete:
delete from schema.table
where key is null;
SQLSTATE=02000 No row was found for FETCH, UPDATE, or DELETE; or the result of a query is an empty table.
I cannot insert a dummy record to delete either.

Maybe this will do:
with delete_rows as (
    select * from old table (delete from schema.table where key is null)
)
select count(*) from delete_rows

You may try a compound SQL (compiled) statement that "eats" this warning.
BEGIN
  DECLARE CONTINUE HANDLER FOR NOT FOUND
    BEGIN END;
  DELETE ...;
END
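For the DELETE from your question, the complete statement would look something like the sketch below (my assumption of how you would wire it up; note that when run from the command line you may need an alternate statement terminator, e.g. db2 -td@, so the inner semicolons are not taken as the end of the block):
BEGIN
  -- the empty handler body simply swallows the +100 / SQLSTATE 02000 condition
  DECLARE CONTINUE HANDLER FOR NOT FOUND
    BEGIN END;
  DELETE FROM schema.table
   WHERE key IS NULL;
END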

Related

How can I trigger a delete action on a table with an if-statement?

I'm developing this trigger for a personal project and I'm not sure what I'm doing wrong. The trigger compiles fine and, testing what the trigger is supposed to do, it works... but the condition isn't working. The trigger just does nothing.
Here's my code:
CREATE OR REPLACE TRIGGER z_delete
BEFORE INSERT or UPDATE OR INSERT OF status ON TABLE_1
FOR EACH ROW
BEGIN
  IF (:new.condition_1 = 'I' or :new.condition_2 = 'Z')
  THEN
    DELETE TABLE_2 WHERE EXISTS
      (SELECT * FROM TABLE_1 WHERE TABLE_1.Value_1 = TABLE_2.Value_2 AND TABLE_1.condition_1 = 'I');
    DELETE interference_results WHERE EXISTS
      (SELECT * FROM TABLE_1 WHERE TABLE_1.Value_1 = TABLE_2.Value_3 AND TABLE_1.condition_1 = 'I');
  END IF;
END z_delete;
Does anyone have any idea what's going on?
Thanks in advance.
Well, your question is: "Does anyone have any idea what's going on?" So I will try to answer that question.
At the beginning of your code for the trigger you have some code you do not need:
BEFORE INSERT or UPDATE OR INSERT OF status ON TABLE_1
This will be enough:
BEFORE INSERT or UPDATE OF status ON TABLE_1
Then, in your trigger, the first DELETE command is OK, but the second one is not, because the SELECT in the EXISTS clause is wrong:
CHECK THIS EXAMPLE
To help you more, I need to know what your plan is for the second DELETE statement. What do you want to accomplish? Is "interference_results" the name of a table, or something else?
Hope this helps for now...
Are you by chance querying the same record you're inserting into TABLE_1? If so, a BEFORE trigger won't do, as the record isn't created yet. Alternatively, use the actual :NEW values in the dependent DML.
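A rough sketch of that last suggestion, keeping the names from the question (it assumes Value_2 belongs to TABLE_2 and Value_3 to interference_results, which the original code leaves ambiguous, so adjust the filters to your real logic):
CREATE OR REPLACE TRIGGER z_delete
BEFORE INSERT OR UPDATE OF status ON TABLE_1
FOR EACH ROW
BEGIN
  IF :new.condition_1 = 'I' OR :new.condition_2 = 'Z' THEN
    -- act on the values of the row being inserted/updated
    -- instead of re-querying TABLE_1
    DELETE FROM TABLE_2
     WHERE TABLE_2.Value_2 = :new.Value_1;
    DELETE FROM interference_results
     WHERE interference_results.Value_3 = :new.Value_1;
  END IF;
END z_delete;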

BEFORE DELETE trigger

How do I stop a row from being deleted when its PK is referenced in another table (without a FK), using a trigger?
Would CALL cannot_delete_error stop the delete?
This is what I've got so far:
CREATE TRIGGER T1
BEFORE DELETE ON Clients
FOR EACH ROW
BEGIN
  SELECT Client, Ref FROM Clients K, Invoice F
  IF F.Client = K.Ref
    CALL cannot_delete_error
  END IF;
END
Use an 'INSTEAD OF DELETE' trigger.
Basically, you can evaluate whether or not you should delete the item. In the trigger you can ultimately decide to delete the item like this:
--test to see if you actually should delete it.
--if you do decide to delete it
DELETE FROM MyTable
WHERE ID IN (SELECT ID FROM deleted)
One side note: remember that the 'deleted' table may contain several rows.
Another side note: try to do this outside of the db if possible, or with a preceding query. Triggers are downright difficult to maintain. A simple query or function (e.g. dbo.udf_CanIDeleteThis()) can be much more versatile.
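For what it's worth, here is a minimal T-SQL sketch of that idea, using the Clients.Ref / Invoice.Client relationship from the question (the ClientID key column name is an assumption):
CREATE TRIGGER tr_clients_instead_of_delete
ON Clients
INSTEAD OF DELETE
AS
BEGIN
    -- only carry out the delete for rows that no invoice still references
    DELETE FROM Clients
    WHERE ClientID IN (SELECT d.ClientID
                       FROM deleted d
                       WHERE NOT EXISTS (SELECT 1
                                         FROM Invoice i
                                         WHERE i.Client = d.Ref));
END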
If you're using MySQL 5.5 or later, you can use SIGNAL:
DELIMITER //
CREATE TRIGGER tg_fk_check
BEFORE DELETE ON clients
FOR EACH ROW
BEGIN
  IF EXISTS(SELECT *
              FROM invoices
             WHERE client_id = OLD.client_id) THEN
    SIGNAL sqlstate '45000'
      SET message_text = 'Cannot delete a parent row: child rows exist';
  END IF;
END//
DELIMITER ;
Here is a SQLFiddle demo. Uncomment the last delete and click Build Schema to see it in action.
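With that trigger in place, an attempt to delete a client that still has invoices fails with something like:
DELETE FROM clients WHERE client_id = 1;
-- ERROR 1644 (45000): Cannot delete a parent row: child rows exist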

Mutating table exception when selecting max(date column of TABLE_X) in an after update trigger for TABLE_X

I have a trigger somewhat like this (sorry, I can't display the actual SQL because of company rules, and this is from a different machine):
create or replace trigger TR_TABLE_X_AU
after update on TABLE_X
for each row
declare
  cursor cursor_select_fk is
    select FK_FOR_ANOTHER_TABLE
      from TABLE_Y Y, TABLE_Z Z
     where :NEW.JOINING_COL = Y.JOINING_COL
       and Y.JOINING_COL = Z.JOINING_COL
       and :NEW.FILTER_CONDITION_1 = Y.FILTER_CONDITION_1
       and :NEW.FILTER_CONDITION_2 = Y.FILTER_CONDITION_2
       and :NEW.SOME_DATE_COL = (select max(SOME_DATE_COL)
                                   from TABLE_X
                                  where FILTER_CONDITION_1 = :NEW.FILTER_CONDITION_1
                                    and FILTER_CONDITION_2 = :NEW.FILTER_CONDITION_2);
begin
  for rec in cursor_select_fk loop
    PCK_SOME_PACKAGE.SOME_PROC(rec.FK_FOR_ANOTHER_TABLE);
  end loop;
end TR_TABLE_X_AU;
We resorted to triggers since this is an enhancement. The nested query selecting the max date seems to be the cause of the problem; changing it to sysdate results in no exceptions. Any idea how I can get the max date during the execution of the trigger for TABLE_X? Thanks!
EDIT:
Also, it seems similar functions such as count, sum, etc. produce the same error. Does anyone know a workaround for this?
A mutating table is a table that is being modified by an UPDATE, DELETE, or INSERT statement, or a table that might be updated by the effects of a DELETE CASCADE constraint.
The session that issued the triggering statement cannot query or modify a mutating table. This restriction prevents a trigger from seeing an inconsistent set of data.
Trigger Restrictions on Mutating Tables
Which means you cannot issue max(some_date_col) against TABLE_X from within a row-level trigger on TABLE_X.
A compound trigger could be a possible workaround.
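For illustration, here is a rough compound-trigger sketch (11g or later), built from the names in the question; the idea is to remember the :NEW values at row level and defer the MAX() query to the after-statement section, where TABLE_X is no longer mutating:
create or replace trigger TR_TABLE_X_CAU
for update on TABLE_X
compound trigger
  -- keep only the columns needed later
  type t_row is record (
    joining_col        TABLE_X.JOINING_COL%type,
    filter_condition_1 TABLE_X.FILTER_CONDITION_1%type,
    filter_condition_2 TABLE_X.FILTER_CONDITION_2%type,
    some_date_col      TABLE_X.SOME_DATE_COL%type
  );
  type t_rows is table of t_row index by pls_integer;
  g_rows t_rows;
  g_row  t_row;

  after each row is
  begin
    -- TABLE_X cannot be queried here, so just remember the :NEW values
    g_row.joining_col        := :NEW.JOINING_COL;
    g_row.filter_condition_1 := :NEW.FILTER_CONDITION_1;
    g_row.filter_condition_2 := :NEW.FILTER_CONDITION_2;
    g_row.some_date_col      := :NEW.SOME_DATE_COL;
    g_rows(g_rows.count + 1) := g_row;
  end after each row;

  after statement is
  begin
    -- the triggering statement has finished, so querying TABLE_X is allowed
    for i in 1 .. g_rows.count loop
      for rec in (select FK_FOR_ANOTHER_TABLE
                    from TABLE_Y Y, TABLE_Z Z
                   where g_rows(i).joining_col = Y.JOINING_COL
                     and Y.JOINING_COL = Z.JOINING_COL
                     and g_rows(i).filter_condition_1 = Y.FILTER_CONDITION_1
                     and g_rows(i).filter_condition_2 = Y.FILTER_CONDITION_2
                     and g_rows(i).some_date_col = (select max(SOME_DATE_COL)
                                                      from TABLE_X
                                                     where FILTER_CONDITION_1 = g_rows(i).filter_condition_1
                                                       and FILTER_CONDITION_2 = g_rows(i).filter_condition_2))
      loop
        PCK_SOME_PACKAGE.SOME_PROC(rec.FK_FOR_ANOTHER_TABLE);
      end loop;
    end loop;
    g_rows.delete;
  end after statement;
end TR_TABLE_X_CAU;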

How to create a trigger in Oracle which will restrict insert and update statements on a table based on a condition

I have an account table like this --
create table account
(
  acct_id int,
  cust_id int,
  cust_name varchar(20)
);
insert into account values(1,20,'Mark');
insert into account values(2,23,'Tom');
insert into account values(3,24,'Jim');
I want to create a trigger which will ensure that no record can be inserted or updated in the account table with acct_id 2 and cust_id 23.
My code is --
create trigger tri_account
before insert or update
on account
for each row
begin
  IF (:new.acct_id == 2 and :new.cust_id == 23) THEN
    DBMS_OUTPUT.PUT_LINE('No insertion with id 2 and 23.');
    rollback;
  END IF;
end;
So the trigger is created, but with a compilation error.
Now when I insert any record with acct_id 2 and cust_id 23, it doesn't allow it, but I get an error saying
ORA-04098: trigger 'OPS$0924769.TRI_ACCOUNT' is invalid and failed re-validation
I don't understand this. I also want to show a message that this insertion is not possible.
Please help...
The equality operator in Oracle is =, not ==.
You cannot commit or rollback in a trigger. You can throw an exception which causes the triggering statement to fail and to be rolled back (though the existing transaction will not necessarily be rolled back).
It does not appear that this trigger compiled successfully when you created it. If you are using SQL*Plus, you can type show errors after creating a PL/SQL object to see the compilation errors.
You should never write code that depends on the caller being able to see the output from DBMS_OUTPUT. Most applications will not so most applications would have no idea that the DML operation failed if your trigger simply tries to write to the DBMS_OUTPUT buffer.
Putting those items together, you can write something like
create trigger tri_account
before insert or update
on account
for each row
begin
  IF (:new.acct_id = 2 and :new.cust_id = 23) THEN
    raise_application_error( -20001, 'No insertion with id 2 and 23.');
  END IF;
end;
A trigger is more flexible, but you can also accomplish this through the use of a CHECK CONSTRAINT:
ALTER TABLE account ADD CONSTRAINT check_account CHECK ( acct_id != 2 OR cust_id != 23 )
ENABLE NOVALIDATE;
The NOVALIDATE clause ensures that the check constraint does not attempt to validate existing data, though it will validate all future data.
Hope this helps.
IF (:new.acct_id = 2 and :new.cust_id = 23) THEN
This must be OR, not AND.
Also, when using conditional checks you don't need the colons (:); using them will always cause errors.
Note: exclude the colon only in the places where the condition check is performed.

How to merge rows + retrieve new and existing keys

In an Oracle table (e.g. MYTABLE, with a numeric sequenced field as primary key), I have to insert several thousand rows, but some of them are expected to already exist in the table.
Naturally, I should try to use MERGE, but I also need to retrieve all created (when inserting) and existing (when updating) primary keys.
It should also be as fast as possible.
Is the following attempt (pseudo code) the only way to go? Thanks.
keys_list = empty array
for each row to merge
    do query 'SELECT PK_MYTABLE FROM MYTABLE WHERE PK_MYTABLE = ' + row.pk_mytable
        ==> retrieve key
    if found then:
        add key to keys_list
    else:
        do query 'INSERT INTO MYTABLE (PK_MYTABLE, ...) VALUES (SEQ_MYTABLE.NEXTVAL, ...)'
        do query 'SELECT SEQ_MYTABLE.CURRVAL FROM DUAL' ==> retrieve key
        add key to keys_list
1. Add a MODIFICATION_DATE column to the table.
2. Grab and save the sysdate.
3. When you merge, update/insert the value of that sysdate as well.
4. When the merge is complete, select the rows where MODIFICATION_DATE equals the saved sysdate and you have the set you are interested in (see the sketch below).
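A minimal sketch of that approach (the key_field / value_field / sourcetable names are borrowed from the MERGE answer below, and MYTABLE is assumed to have been extended with a MODIFICATION_DATE column):
declare
  v_run_date date := sysdate;   -- grab and save the date once, up front
begin
  merge into mytable mt
  using (select key_field, value_field from sourcetable) st
     on (mt.key_field = st.key_field)
  when matched then update
       set mt.value_field = st.value_field,
           mt.modification_date = v_run_date
  when not matched then insert (key_field, value_field, modification_date)
       values (st.key_field, st.value_field, v_run_date);

  -- every key touched by this run is now stamped with v_run_date
  for r in (select key_field from mytable where modification_date = v_run_date) loop
    null;  -- collect r.key_field into your keys list here
  end loop;
end;
/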
Why can't you use a MERGE statement for this? This is exactly what a MERGE is for. Here is a rough idea of how it would look...
merge into mytable mt
using
(
select key_field, value_field from sourcetable
) st
on
( mt.key_field = st.key_field )
when matched then update
set mt.value_field = st.value_field
when not matched then insert
( key_field, value_field )
values
( st.key_field, st.value_field )
;
Using a MERGE statement is fast because it is a single statement and the Oracle optimizer can utilize indexes and choose a better explain path than iterating through a cursor using PL/SQL.
If the keys are being generated from a sequence, then the normal way to get the key generated by that insert is to use the returning clause:
declare
v_insert_seq integer;
begin
insert into t1 (pk, c1)
values (myseq.nextval, 'value') returning pk into v_insert_seq;
end;
/
However, as best as I can tell, the merge statement doesn't support that returning feature.
Depending on the source of your new rows, there are different ways you could do this. If you are inserting one row at a time, then the approach above will work pretty well.
To detect the duplicate records, just catch the exceptions when you are inserting (when dup_val_on_index) and then handle them with updates.
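A sketch of that insert-then-update pattern for a single row (BIZ_KEY and SOME_COL are hypothetical columns; BIZ_KEY stands for whatever unique key the duplicates actually collide on):
declare
  v_biz_key  mytable.biz_key%type  := 'SOME-KEY';   -- sample input values
  v_some_col mytable.some_col%type := 'value';
  v_pk       mytable.pk_mytable%type;
begin
  begin
    insert into mytable (pk_mytable, biz_key, some_col)
    values (seq_mytable.nextval, v_biz_key, v_some_col)
    returning pk_mytable into v_pk;                -- key of the newly created row
  exception
    when dup_val_on_index then                     -- the row already exists
      update mytable
         set some_col = v_some_col
       where biz_key = v_biz_key
      returning pk_mytable into v_pk;              -- key of the existing row
  end;
  -- v_pk now holds the created or existing primary key; add it to your keys list
end;
/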
If your source of rows is another table, you probably want to look at bulk inserts, and allowing Oracle to return you an array of new PK values. I tried this, but couldn't get it working, so perhaps it's not supported (or I'm missing something today - it gives a syntax error):
declare
  type t_type is table of t1.pk%type;
  v_insert_seqs t_type;
begin
  insert into t1 (pk, c1)
  select level newpk, 'value' c1value
    from dual
  connect by level <= 10
  returning pk bulk collect into v_insert_seqs;
exception
  when dup_val_on_index then
    raise;
end;
/
The next best thing is to select the rows into arrays and then use bulk binds with the RETURNING clause to capture the new PK IDs, and also use SAVE EXCEPTIONS to catch all the rows that failed to insert. Then you can process any of the failed inserts afterwards:
set serveroutput on
declare
  type t_pk is table of t1.pk%type;
  type t_c1 is table of t1.c1%type;
  v_pks t_pk;
  v_c1s t_c1;
  v_new_pks t_pk;
  ex_dml_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(ex_dml_errors, -24381);
begin
  -- get the batch of rows you want to insert
  select level newpk, 'value' c1
    bulk collect into v_pks, v_c1s
    from dual connect by level <= 10;

  -- bulk bind insert, saving exceptions and capturing the newly inserted records
  forall i in v_pks.first .. v_pks.last save exceptions
    insert into t1 (pk, c1)
    values (v_pks(i), v_c1s(i)) returning pk bulk collect into v_new_pks;
exception
  -- process the exceptions
  when ex_dml_errors then
    for i in 1 .. SQL%BULK_EXCEPTIONS.count loop
      DBMS_OUTPUT.put_line('Error: ' || i ||
        ' Array Index: ' || SQL%BULK_EXCEPTIONS(i).error_index ||
        ' Message: ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
    end loop;
end;
/
If you are running Oracle 10 or better, you might be able to do much the same thing nearly for free: issue a commit before the merge to establish a starting SCN, then after the merge use ORA_ROWSCN to detect which rows have changed.
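A sketch of that ORA_ROWSCN variant (assumptions: you have EXECUTE on DBMS_FLASHBACK to read the starting SCN, and the table was created with ROWDEPENDENCIES, otherwise the SCN is tracked per block rather than per row):
declare
  v_start_scn number;
begin
  commit;                                                   -- settle the SCN before the merge
  v_start_scn := dbms_flashback.get_system_change_number;

  merge into mytable mt
  using (select key_field, value_field from sourcetable) st
     on (mt.key_field = st.key_field)
  when matched then update set mt.value_field = st.value_field
  when not matched then insert (key_field, value_field)
       values (st.key_field, st.value_field);
  commit;                                                   -- stamp the merged rows with a new SCN

  -- anything with a newer SCN than the starting point was touched by the merge
  for r in (select key_field from mytable where ora_rowscn > v_start_scn) loop
    null;  -- collect r.key_field here
  end loop;
end;
/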