Trigger a Stored Procedure on Insert or Update Triggers - sql

I have two Triggers for a table that share identical functionality. A record is inserted into or updated in said table, and some of its fields are Inserted into another table, or elsewhere into the same table. If it is a new record, the Insert Trigger triggers. If it is an already-existing record, the Update Trigger triggers.
It isn't rocket science.
However, I'm entertaining the idea of simplifying these Triggers by having each of them call a Stored Procedure that will replicate the above functionality. I'm going this route because a separate table I'm working with doesn't allow an After Insert Trigger to be used in this manner with the code it contains, and I'd like to find a workaround. I'm not sure if it'll work or not, but I'd like to give it a shot.
I have no idea how to go about it, though. I utilize variables that are populated with data from tables Joined to the triggering one, and N.field or O.field throughout the Triggers. Is this possible if the Trigger calls a Proc? If it's called For Each Row, are the New and Old fields available to be used within the Proc? If so, how? Input and Output parameters with regard to Stored Procedures are new to me. I don't even know if what I'm asking makes as much sense written out as it does in my head.
For example, we'll say I'm working with a table called Cars.
ID | VIN   | CATEGORY
---+-------+---------
 1 | A1234 | A
 2 | A1235 | A
 3 | B1234 | B
If a record is Inserted into Cars, the first character of CARS.VIN should be Inserted into CARS.CATEGORY. If a record is Updated, CARS.CATEGORY should be updated as well, referencing N.VIN. I have a Trigger for each of these and they work fine.
The minutiae of the 'why' aside, is it possible to contain the functionality of these Triggers into a Proc that either can call? How would I go about the initial steps of creation for said Proc?

Instead of using a trigger, consider defining CATEGORY as a generated column, for example
CREATE TABLE cars (
    id       INT GENERATED ALWAYS AS IDENTITY,
    vin      VARCHAR(17),
    category CHAR(1) GENERATED ALWAYS AS ( LEFT( vin, 1 ) )
);
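As for the original question of whether the New/Old values are available inside a called procedure: they are not visible implicitly, but a For Each Row trigger can pass whichever columns it needs as ordinary input parameters, so both triggers can share one procedure. A minimal sketch, assuming an Informix-style dialect (suggested by the N/O correlation names in the question) and a hypothetical car_audit table used only for illustration:
CREATE PROCEDURE record_car(p_id INT, p_vin CHAR(17))
    -- the first character of the VIN becomes the category, as in the example
    INSERT INTO car_audit (car_id, vin, category)
        VALUES (p_id, p_vin, SUBSTR(p_vin, 1, 1));
END PROCEDURE;

CREATE TRIGGER cars_ins INSERT ON cars
    REFERENCING NEW AS n
    FOR EACH ROW (EXECUTE PROCEDURE record_car(n.id, n.vin));

CREATE TRIGGER cars_upd UPDATE OF vin ON cars
    REFERENCING OLD AS o NEW AS n
    FOR EACH ROW (EXECUTE PROCEDURE record_car(n.id, n.vin));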

Related

Informix select trigger to update column

Is it possible to have a trigger increase the value of a number in a column every time it gets selected? We have special tables where we store the new id, and when we update it in the app it tends to run into conflicts before the update happens, even though it all takes less than a second. So I was wondering whether the database can be set up to increase the value after every SELECT on that column. Don't ask me why we don't use autoincrement for ids, because I don't know.
Informix provides the SERIAL and BIGSERIAL types (and also SERIAL8, but don't use that) which provide autoincrement support. It also provides SEQUENCES with more sophisticated autoincrements. You should aim to use one of those.
Trying to use a SELECT trigger to update the table being selected from is, at best, fraught with problems about transactions and the like (problems which both the types and sequences carefully avoid).
If your design team needs help making effective use of these, ask a new question outlining what you want to achieve.
Normally, the correct way to proceed is to make the ID column in each table that defines 'something' (the Orders table, the Customer table, …) into a SERIAL column and either not insert a value into the ID column or insert 0 into it. The generated value can be retrieved and used when creating auxiliary information (order items, etc.).
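For example, a sketch with a made-up orders table (inserting 0 into the SERIAL column makes the server generate the next value):
CREATE TABLE orders
(
    id         SERIAL NOT NULL PRIMARY KEY,
    order_date DATE
);

INSERT INTO orders (id, order_date) VALUES (0, TODAY);
-- or omit the column entirely:
INSERT INTO orders (order_date) VALUES (TODAY);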
Note that you could think about using:
CREATE TABLE xyz_sequence
(
xyz SERIAL NOT NULL PRIMARY KEY
);
and using:
INSERT INTO xyz_sequence VALUES(0);
and then retrieving the inserted value — in Informix ESQL/C, you'd use sqlca.sqlerrd[1], in other languages, other techniques. You can also delete the newly inserted record, or even all the records in the table. You can afford to ignore errors from the DELETE statement; sooner or later, the rows will be deleted. The next value inserted will continue where the prior ones left off.
In a stored procedure, you'd use DBINFO('sqlca.sqlerrd1') to get the inserted value. You'd use DBINFO('bigserial') to get the value if you use a BIGSERIAL type.
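Putting the xyz_sequence idea into a stored procedure might look like this sketch (the procedure name is made up):
CREATE PROCEDURE next_xyz() RETURNING INT;
    DEFINE new_id INT;

    INSERT INTO xyz_sequence VALUES (0);
    LET new_id = DBINFO('sqlca.sqlerrd1');  -- value just generated by the SERIAL column

    DELETE FROM xyz_sequence;  -- clean out the used rows; the next SERIAL value carries on regardless

    RETURN new_id;
END PROCEDURE;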
I found a possible answer in this question: update with return value. Instead of doing it with a SELECT, it seems better to return the value directly from the UPDATE; since UPDATE takes locks, it should be safer even in a multithreaded application. But these are just my assumptions. Hopefully it will help someone.

Update records in database automatically

Let me describe my scenario here.
I have a table with multiple records where the name is the same, as they are records for the same person that get updated on a daily basis.
Right now, I am trying to find out the easiest way to update all the names accordingly.
Name is going to be updated (TestName -> RealName)
I want this change to be applied to all the records with the same name, in this case, "TestName"
I can do a single query, but I am trying to find if there's an automatic way to do this correctly.
I've been trying to use triggers, but in most cases I end up with an infinite loop, since I am updating the very table the trigger is bound to, which invokes another update, and so on.
I don't need an exact solution, just give me some ropes about how it can be achieved, please.
The problem may be simply resolved by using the function pg_trigger_depth() in the trigger, e.g.:
create trigger before_update_on_my_table
before update on my_table
for each row
when (pg_trigger_depth() = 0) -- this prevents recursion
execute procedure before_update_on_my_table();
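The trigger function itself could then propagate the change to the other rows; a rough sketch, assuming the table has id and name columns:
CREATE OR REPLACE FUNCTION before_update_on_my_table() RETURNS trigger AS $$
BEGIN
    IF NEW.name IS DISTINCT FROM OLD.name THEN
        -- rename every other row that still carries the old name;
        -- the pg_trigger_depth() check in the WHEN clause stops this from recursing
        UPDATE my_table SET name = NEW.name
        WHERE name = OLD.name AND id <> NEW.id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;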
However, it seems that the table is poorly designed. It should not contain names. Create a table with names (say user_name) and in the old table store a reference to the new one, e.g.:
create table user_name(id serial primary key, name text);
create table my_table(id serial primary key, user_id int references user_name(id));
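With that layout, the rename touches exactly one row in user_name and every row in my_table follows automatically:
UPDATE user_name SET name = 'RealName' WHERE name = 'TestName';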
You can use event triggers in PostgreSQL: https://www.postgresql.org/docs/9.3/sql-createeventtrigger.html

When inserting to multiple tables, what is the best approach for this matter?

I'm not sure if there is a debate about this.
When I read books, I'm advised to use triggers to follow up inserts into other tables. On the other hand, my mentor uses stored procedures to insert into the other tables.
My question here, which is the best method? Or is there a better way?
If you want to insert data into a table using values supplied at runtime, you can only do it with a stored procedure, not with a trigger, because triggers can't accept parameters or anything else at runtime.
You can call a stored procedure from inside another stored procedure, but you can't directly call another trigger from within a trigger.
So I think you should use stored procedures rather than triggers.
For details you can visit the following link.
http://www.codeproject.com/Tips/624566/Differences-between-a-Stored-Procedure-and-a-Trigg
You don't need either. Start a transaction, make all your inserts (parent table first, child tables afterwards), end the transaction with COMMIT, and you are done.
Use stored procedures if you want to bundle this and ensure some kind of consistency (such that there is always at least one child row for a parent row, for instance). But this can get complicated. Say you want to insert a new product with all its colors, sizes, suppliers and selling markets. Certain colors/sizes will be supplied by one or more suppliers and not by others, and the same goes for selling markets. To show these relations we usually use tables, but now you'll have to pack them into parameters somehow in order to get them inserted into tables. I was told that some people have all their database writes in procedures. This is probably possible, but it has its limits.
As to auto-inserts by triggers: You can use them to log data, so as to get a history or the like, but you don't use them to insert business data. Think of an order with its positions: You insert the order header (order date, client number, ...), but how shall the trigger know which items were ordered? Or vice versa: You insert an order position (item number, price) and want the header to be created automatically, but how shall the trigger know the client number? Triggers are not appropriate for such things.
As mentioned: Usually you'd just work with plain SQL in transactions and that's it.
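A bare-bones sketch of that plain-transaction approach (table and column names are made up for illustration):
BEGIN;  -- or START TRANSACTION / BEGIN TRANSACTION, depending on the DBMS

INSERT INTO orders (id, client_no) VALUES (1001, 42);

INSERT INTO order_items (order_id, item_no, quantity, price) VALUES (1001, 'A-17', 2, 19.99);
INSERT INTO order_items (order_id, item_no, quantity, price) VALUES (1001, 'B-02', 1, 5.50);

COMMIT;  -- both tables become visible together; a ROLLBACK would discard both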

PL/SQL: Is it possible to access the :NEW / :OLD with a variable holding the column name?

First of all, thanks for looking into my question. Well, now to my problem. I was writing a trigger listening for UPDATEs that writes to a logfile which columns have been changed, along with the new/old value of each column.
For example there is a table with these columns:
[ID] [NAME] [AGE]
1 Me 18
If I would now call:
UPDATE TABLE VALUES(1, 'Not Me', 19);
It was supposed to log:
[NAME], Me, Not Me
[AGE], 18, 19
First of all, I wanted to get the column name dynamically so my trigger would work generically. That worked, and I ended up having the column name in a variable like:
x.column_name
I know how to use :OLD / :NEW; however, I couldn't figure out a way to get
e.g. :OLD.Id
:OLD => x.column_name // where x.column_name would hold Id
I am not quite sure if this is simply not possible or I'm just missing something important on SQL.
Thanks in Advance for any answers ;)
You cannot dynamically reference columns in a record (or a pseudo-record like :new or :old). References to column names need to be static.
You could, however, dynamically generate the trigger (though this would mean that you would need to regenerate the trigger every time you add or remove columns). There are a variety of approaches to this, here is one AskTom example that uses a SQL*Plus script to generate the trigger dynamically.
Taking a step back, though, I'm always pretty dubious about storing audit data this way in the first place. When every altered column is stored as a separate row in the audit table, the audit table gets quite large and running queries to see the prior state of a row requires joining the table to itself once for every column in the table. That generally gets very slow very quickly.
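For concreteness, the generated trigger for a table like the one in the question ends up with one static branch per column; a sketch (my_table and change_log are assumed names):
CREATE OR REPLACE TRIGGER my_table_audit
AFTER UPDATE ON my_table
FOR EACH ROW
BEGIN
  IF UPDATING('NAME') THEN
    INSERT INTO change_log (column_name, old_value, new_value)
    VALUES ('NAME', :OLD.name, :NEW.name);
  END IF;
  IF UPDATING('AGE') THEN
    INSERT INTO change_log (column_name, old_value, new_value)
    VALUES ('AGE', TO_CHAR(:OLD.age), TO_CHAR(:NEW.age));
  END IF;
END;
/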
I was able to log changes into a log table using the solution given on the OTN discussion forum.
To get it to work, I created the supporting tables as follows:
CREATE TABLE T_HISTORIE (
    hi_betreffende_tabelle VARCHAR2(100),
    hi_betreffendes_feld   VARCHAR2(100),
    hi_wert_alt            VARCHAR2(100),
    hi_wert_neu            VARCHAR2(100)
);
CREATE TABLE T_HISTORIENSTEUERUNG (
    HS_TABELLE VARCHAR2(100),
    HS_FELD    VARCHAR2(100)
);
T_HISTORIENSTEUERUNG is a table where you enter which tables and columns you want to log. For example, if you are interested in logging the changes to only the NAME column, then you would make an entry as follows:
INSERT INTO T_HISTORIENSTEUERUNG VALUES ('TABLE', 'NAME');
T_HISTORIE is the log table. It contains the table name, column name, old value, and new value for every update that changes the value of a column which has been configured to be logged (using the T_HISTORIENSTEUERUNG table). You could possibly add the time when the update was made, too; you would need to modify the PL/SQL package as well.
All subsequent updates to the TABLE would be logged.

Creating trigger to move a row to an archive table

I'm new to triggers in PostgreSQL and I don't know if what I want to do is a job for a trigger, but it was my teacher's suggestion.
I have the following link table:
id | link | visited | filtered | broken | visiting
The last four attributes are boolean and default to false. Currently I set one of them to true with an UPDATE, and after that there is no more use for the row.
The idea of the new design is to leave the link table with only the id and link attributes, and move the other attributes out to archive tables (visitedLinksTable, brokenLinksTable, filteredLinksTable and visitingTable).
Is a trigger useful for this? The suggestion was to move the row to another table (insert it into some archive table and delete it from the link table).
Something along these lines should work. The particulars will depend on your specific schema, etc.
-- link_table and archive_table stand in for your actual table names
CREATE FUNCTION update_function() RETURNS TRIGGER AS $$
BEGIN
    IF NEW.visited IS TRUE
       OR NEW.filtered IS TRUE
       OR NEW.broken IS TRUE
       OR NEW.visiting IS TRUE THEN
        INSERT INTO archive_table (id, link, visited, filtered, broken, visiting)
        VALUES (NEW.id, NEW.link, NEW.visited,
                NEW.filtered, NEW.broken, NEW.visiting);
        DELETE FROM link_table WHERE id = NEW.id;
        RETURN NULL;  -- skip the original UPDATE; the row has been archived
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER update_trigger
BEFORE UPDATE ON link_table
FOR EACH ROW EXECUTE PROCEDURE update_function();
A trigger wouldn't really work for this. Presumably you'd need some way to determine which table (visited, broken, filtered, visiting) the link should be moved to when you delete it but there's no way to tell the trigger where the link should go.
You could use a couple non-trigger functions to encapsulate a process like this:
1. The link goes into the link table.
2. Move the link to the "visiting" table.
3. Depending on the result of trying the link, move it from "visiting" to the "visited", "broken", or "filtered" table.
You could use a stored procedure to take care of each of the transitions but I don't know if you'd gain anything over manual INSERT ... SELECT and DELETE statements.
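One of those transitions written as a function might look like this sketch (PostgreSQL; the link_table / visited_links names are placeholders):
CREATE FUNCTION mark_visited(p_id integer) RETURNS void AS $$
BEGIN
    -- copy the row into the archive, then remove it from the main table
    INSERT INTO visited_links (id, link)
        SELECT id, link FROM link_table WHERE id = p_id;
    DELETE FROM link_table WHERE id = p_id;
END;
$$ LANGUAGE plpgsql;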
However, if you really have a thing for triggers (and hey, who doesn't like triggers?) then you could use your original six column table, add a last-accessed timestamp, and periodically do some sort of clean-up:
delete from link_table
where last_accessed < some_time
and (visited = 't' or filtered = 't' or broken = 't')
Then you could use a DELETE trigger to move the link to one of your archive tables based on the boolean columns.
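A sketch of that DELETE trigger (the archive-table names are assumptions):
CREATE FUNCTION archive_link() RETURNS trigger AS $$
BEGIN
    IF OLD.visited THEN
        INSERT INTO visited_links (id, link) VALUES (OLD.id, OLD.link);
    ELSIF OLD.broken THEN
        INSERT INTO broken_links (id, link) VALUES (OLD.id, OLD.link);
    ELSIF OLD.filtered THEN
        INSERT INTO filtered_links (id, link) VALUES (OLD.id, OLD.link);
    END IF;
    RETURN OLD;  -- allow the DELETE to proceed
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER archive_link_trg
BEFORE DELETE ON link_table
FOR EACH ROW EXECUTE PROCEDURE archive_link();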
You could use views and view triggers on recent PostgreSQL, I suppose. In general, I think it is best to encapsulate your storage logic inside your data logic anyway, and views are a useful way to do this.
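For example, a thin view over the storage table with an INSTEAD OF trigger (a sketch; the names are illustrative):
CREATE VIEW links AS
    SELECT id, link FROM link_table;

CREATE FUNCTION links_insert() RETURNS trigger AS $$
BEGIN
    -- today the storage is a single table; later this body can split the
    -- row across archive tables without callers noticing
    INSERT INTO link_table (id, link) VALUES (NEW.id, NEW.link);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER links_insert_trg
INSTEAD OF INSERT ON links
FOR EACH ROW EXECUTE PROCEDURE links_insert();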
Another way would be to have access to/from the table be through a function instead. That way you can maintain a consistent API while changing storage logic as necessary. This is the approach I usually use, but it has a few different tradeoffs compared to the view approach:
The view/trigger approach works better with ORMs, while the procedural approach dispenses with the need for an ORM altogether.
There are different maintenance issues that arise with each approach. Being aware of them is key to managing them.