Checking triggers in a SQL file

I have a SQL file that contains some 30-40 triggers.
Each trigger contains an insert statement into the update_delete table.
Sometimes it's
insert into update_delete(id,value,name) values (:old.id,:old.value,:old.name);
or
insert into update_delete(id,value,name)values(:old.id,:old.value,null);
or
insert into update_delete(id,name)values(:old.id,:old.name);
I want to write a script that scans all the triggers in the SQL file and checks whether the name column of the update_delete table is inserted with :old.name or null.
Please suggest how I should proceed with this.
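If the triggers are also compiled into a database schema (the :old syntax suggests Oracle), one alternative to parsing the file by hand is to scan the trigger source in the data dictionary. A minimal sketch, assuming the triggers live in the current schema; the LIKE pattern is only a rough filter:

-- rough check against the data dictionary, assuming the triggers
-- are installed in the current schema
select name, line, text
from   user_source
where  type = 'TRIGGER'
and    lower(text) like '%insert into update_delete%';

From the matched lines you can then check, by eye or with a second filter, whether the name position carries :old.name or null. Note that an insert statement can span several lines of USER_SOURCE, so a pure line-by-line match may need to join adjacent lines.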

Related

How to capture what DML statement has modified the data in the DML trigger?

I have created a DML trigger to capture the data for a table that needs to track all data modifications.
I am able to capture the SPID, Hostname, Appname and UserName along with the records changed.
Now I want to capture the DML statement that made the modifications to the records.
For example, I can insert into the table using:
a normal INSERT statement with VALUES,
an INSERT that selects the records from another query,
or a MERGE statement.
Similarly, the other DML operations, UPDATE and DELETE, can each be performed in multiple ways.
I want to capture the statement that the user executed to modify the records.
Is there a way I can get this functionality?
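One option to sketch, assuming SQL Server 2016 or later and VIEW SERVER STATE permission: inside the trigger, read the session's input buffer. This returns the batch the session submitted, which is usually, but not always, the exact statement that fired the trigger. The audit table below is hypothetical:

INSERT INTO dbo.AuditLog (spid, captured_statement)  -- hypothetical audit table
SELECT @@SPID, ib.event_info
FROM sys.dm_exec_input_buffer(@@SPID, NULL) AS ib;

On older versions, DBCC INPUTBUFFER(@@SPID) gives similar information, but its result set is harder to capture into a table.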

How to identify with a trigger whether an insert, update or delete operation was performed on any table in SQL Server

I have a database in which I want to make a centralized trigger: if any table in the database is hit with an insert, update or delete operation, the trigger should be executed, and the column values inserted, updated or deleted by that operation should be saved in my own table with the help of the trigger.
I have looked at the fn_dblog function in SQL Server, but it does not return the column values that are affected. I also need to save the column values that are going to be inserted or updated.
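SQL Server has no single database-wide DML trigger; DDL triggers cover schema changes only. One workaround to sketch: generate a per-table trigger for every table from the catalog views. The audit table and trigger names below are hypothetical, and this version only records which table changed and when; capturing the actual column values would need per-table column lists or a generic XML dump of the inserted and deleted pseudo-tables:

SELECT 'CREATE TRIGGER trg_audit_' + t.name +
       ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) +
       ' AFTER INSERT, UPDATE, DELETE AS' +
       ' INSERT INTO dbo.MyAuditTable (table_name, changed_at)' +
       ' SELECT ''' + t.name + ''', SYSDATETIME();'
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id;

Run the generated statements to create the triggers. Built-in alternatives worth checking are Change Data Capture and Change Tracking, which record changed rows without hand-written triggers.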

How to insert a large number of rows in Oracle?

Can anyone tell me how to insert a large number of rows in Oracle?
Using an insert statement, we can insert data into a table one row at a time:
insert into example values(1,'name','address');
Suppose I want to insert 100,000 rows. Do I need to insert them one by one as above, or is there another way to insert a large number of rows at a time? Can anyone advise me with an example, please?
Note: I'm not asking about copying data from another table. Just consider that we have an Excel sheet consisting of 100,000 rows; how can we insert them into a particular table?
Thanks,
Sai.
If you are loading using individual insert statements from a script, using SQL*Plus, say, then one handy speed-up is to bunch sets of inserts into anonymous PL/SQL blocks ...
begin
insert into example values(1,'name','address');
insert into example values(1,'name','address');
insert into example values(1,'name','address');
...
end;
/
begin
insert into example values(1,'name','address');
insert into example values(1,'name','address');
insert into example values(1,'name','address');
...
end;
/
This reduces the client/server chatter enormously.
The original file can often be easily modified with unix scripts or a macro in a decent text editor.
Not necessarily what you'd want to embed into a production process but handy for the occasional job.
Use sqlldr with the direct path option.
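A minimal sketch of a control file, assuming the data sits in a comma-separated file; the file and column names here are illustrative:

-- example.ctl
OPTIONS (DIRECT=TRUE)
LOAD DATA
INFILE 'example.csv'
APPEND INTO TABLE example
FIELDS TERMINATED BY ','
(id, name, address)

Then run it with something like sqlldr userid=user/password control=example.ctl. The direct path bypasses much of the conventional SQL engine overhead, which is what makes it fast for bulk loads.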
I suspect you have the data in a CSV file.
Create a directory object, then create an external table. You can query an external table the same way as a regular table; the difference is that the data in the table comes from a file located in the directory object.
http://www.oracle-base.com/articles/9i/external-tables-9i.php
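A minimal sketch of this approach; the directory path, table and column names are illustrative:

create or replace directory ext_dir as '/path/to/files';

create table example_ext (
  id      number,
  name    varchar2(50),
  address varchar2(100)
)
organization external (
  type oracle_loader
  default directory ext_dir
  access parameters (
    records delimited by newline
    fields terminated by ','
  )
  location ('example.csv')
);

-- query it like any table, or copy it into the target
insert into example
select * from example_ext;

Creating the directory object requires the CREATE ANY DIRECTORY privilege, and the database server itself must be able to read the file at that path.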

Reading values inserted by trigger in a different table

I'm having the following issue: I have a trigger on a table A, whose purpose is to compute some values and insert them in a completely different table B.
The problem is that, somewhere in that logic, there is a loop that requires the values that would have been freshly inserted into table B.
I've noticed that SQL Server executes all the INSERT commands at once, after exiting the trigger.
ALTER TRIGGER [dbo].[InsertTrade]
ON [dbo].[Blotter]
AFTER INSERT
AS
BEGIN
/* compute #Variables */
INSERT INTO [dbo].[CompletelyUnrelatedTableWithoutTriggersOnIt]
VALUES /* #Variables */
END
Is there any way of COMMIT-ing that INSERT and being able to read those values while still in the trigger?
Thanks,
D.
First of all, be very careful with how you construct your trigger. If you're using INSERT ... VALUES() in a trigger, it's a good indication that you're assuming there will only ever be one record in the INSERTED pseudo-table. Never make that assumption; instead, your logic should be INSERT ... SELECT <computed cols> FROM INSERTED.
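A minimal sketch of that set-based pattern; the column names and the computation are hypothetical:

ALTER TRIGGER [dbo].[InsertTrade]
ON [dbo].[Blotter]
AFTER INSERT
AS
BEGIN
    -- set-based: handles one row or many in a single statement
    INSERT INTO [dbo].[CompletelyUnrelatedTableWithoutTriggersOnIt] (TradeId, Amount)
    SELECT i.TradeId, i.Amount * 2
    FROM INSERTED AS i;
END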
Second, if you want to get out the values you just put in, you could use the OUTPUT clause (it's not entirely clear what you want to do with the values, so this may not be what you mean); with it you have access to the final values that were inserted "while still in the trigger".
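A sketch of OUTPUT capturing the rows as they are inserted, again with hypothetical column names:

DECLARE @NewRows TABLE (TradeId INT, Amount DECIMAL(18,2));

INSERT INTO [dbo].[CompletelyUnrelatedTableWithoutTriggersOnIt] (TradeId, Amount)
OUTPUT inserted.TradeId, inserted.Amount INTO @NewRows
SELECT i.TradeId, i.Amount * 2
FROM INSERTED AS i;

-- @NewRows now holds the freshly inserted values and can be read
-- by the rest of the trigger body, no COMMIT needed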
If that's not what you want, perhaps it would be better to encapsulate all this functionality into a proc.

Ruby File that Will Log All SQL INSERTs and SQL DELETEs (and only those commands)

I am working with a PostgreSQL database. I have written a .rb file I am using to manipulate data in the database. I want to be able to log all the SQL INSERTs and DELETEs elicited by this file. How do I go about that?
At the start of your script, create the needed temporary tables and add two triggers, one on insert and one on delete, firing for each row. The same thing can also be done with rules:
create temporary table foo_log_ins (like foo);
create rule log_foo_ins as
on insert to foo
do also
insert into foo_log_ins select new.*;
create temporary table foo_log_del (like foo);
create rule log_foo_del as
on delete to foo
do also
insert into foo_log_del select old.*;
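For the trigger-based equivalent, a minimal sketch in PL/pgSQL; the function and trigger names are illustrative:

create function log_foo() returns trigger as $$
begin
  if tg_op = 'INSERT' then
    insert into foo_log_ins values (new.*);
    return new;
  elsif tg_op = 'DELETE' then
    insert into foo_log_del values (old.*);
    return old;
  end if;
  return null;
end;
$$ language plpgsql;

create trigger foo_log_trg
after insert or delete on foo
for each row execute procedure log_foo();

Either way, at the end of the script you can SELECT from foo_log_ins and foo_log_del to dump the logged inserts and deletes.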