Trigger - After insert and delete example - sql

I have a requirement where a trigger should fire whenever a row is inserted into or deleted from table FAB, which contains num as a unique value. Depending on that num value, another table should be updated.
e.g.
FAB table
num    code     trs
10     A2393    80
20     B3445    780
Reel table
reelnr    num    use    flag
340345    10     500    1
When num 10 gets deleted from the FAB table (or any new num gets inserted), the trigger should fire, check the Reel table for rows containing that num value, and give back the reelnr.
How do I proceed with this?

You can use the Inserted and Deleted tables in SQL Server.
These two tables are special tables that are only available inside the scope of a trigger.
If you try to use them outside the scope of a trigger, you will get an error.
Inserted: this table keeps track of the new records inserted into your table.
Suppose six rows are inserted into your table; then this table will contain all six inserted rows.
Deleted: this table keeps track of the records deleted from your table.
The rows removed by the last DELETE statement are held in this table.
For Insert:
CREATE TRIGGER TR_AUDIT_Insert
ON Reel_table
FOR INSERT
AS
BEGIN
    -- template: adjust the target table and column list to your own schema;
    -- [use] is bracketed because USE is a reserved word
    INSERT INTO Reel_table (reelnr, num, [use], flag)
    SELECT
        reelnr,
        num,
        [use],
        flag
    FROM
        INSERTED
END
For Delete:
CREATE TRIGGER TR_AUDIT_Delete
ON Product
FOR DELETE
AS
BEGIN
    -- template: adjust the source/target tables and column list to your own schema
    INSERT INTO Reel_table (reelnr, num, [use], flag)
    SELECT
        reelnr,
        num,
        [use],
        flag
    FROM
        DELETED
END
Note:
I don't know where the three values reelnr, use, and flag are coming from,
so please modify this as per your need.
This is the format of the triggers that we normally use.
You can also do this using a single trigger.
I don't know what your exact requirement is.
If you want to achieve it with only a single trigger, you can refer to this link:
Refer
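As a rough illustration of the single-trigger idea for this exact question, here is a minimal sketch (SQL Server syntax): it fires on both INSERT and DELETE on FAB and looks up the matching reelnr values in the Reel table. Reel_log is a hypothetical table used only to show where the looked-up reelnr could go; replace it with whatever update the other table actually needs.
CREATE TRIGGER TR_FAB_InsertDelete
ON FAB
AFTER INSERT, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- reelnr values whose num was just inserted into FAB
    INSERT INTO Reel_log (reelnr, action)
    SELECT r.reelnr, 'inserted'
    FROM Reel r
    JOIN inserted i ON i.num = r.num;

    -- reelnr values whose num was just deleted from FAB
    INSERT INTO Reel_log (reelnr, action)
    SELECT r.reelnr, 'deleted'
    FROM Reel r
    JOIN deleted d ON d.num = r.num;
END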


Compound Trigger is Triggering Only Once

I have an ORDS-enabled schema which accepts bulk JSON records, splits them one by one, and inserts them one by one into the UEM table.
I've tried to create a trigger which fetches the last inserted row's ID and uses that value to insert into another table. The problem is that the trigger below only fetches the last inserted row's ID, and it only does one insert.
To be more specific:
ORDS gets a bulk JSON payload which consists of 4 records.
The POST handler starts a procedure which splits these 4 records by line break and immediately inserts them into CLOB columns of the UEM table as 4 separate rows by using "connect by level". There is also the ID column, which is automatically created and incremented.
In parallel I would also like to get the ID of these rows and use it in another table insert. I've created the compound trigger below, but this trigger only retrieves the ID of the last record and inserts only one row.
Why do you think it behaves like this? In the end, the procedure "inserted" 4 records.
CREATE OR REPLACE TRIGGER TEST_TRIGGER5
FOR INSERT ON UEM
COMPOUND TRIGGER
  lastid NUMBER;

  AFTER STATEMENT IS
  BEGIN
    SELECT MAX(ID) INTO lastid FROM UEM;
    INSERT INTO SPRINT1 (tenantid, usersessionid, newuser, totalerrorcount, userid)
    VALUES ('deneme', 'testsessionid', 'yes', lastid, 'asdasfqwqwe');
  END AFTER STATEMENT;
END TEST_TRIGGER5;
inserts them into CLOB columns of the UEM table as 4 separate rows by using "connect by level".
You have 1 INSERT statement that is inserting 4 rows.
In parallel I would also like to get the ID of these rows and use it in another table insert. I've created the compound trigger below, but this trigger only retrieves the ID of the last record and inserts only one row.
Why do you think it behaves like this? In the end, the procedure "inserted" 4 records.
It may have inserted 4 ROWS, but there was only 1 STATEMENT, and you are using an AFTER STATEMENT trigger. If you want it to run for each row, then you need to use a row-level trigger.
CREATE OR REPLACE TRIGGER TEST_TRIGGER5
AFTER INSERT ON UEM
FOR EACH ROW
BEGIN
INSERT INTO SPRINT1 (tenantid, usersessionid, newuser, totalerrorcount, userid)
VALUES ('deneme', 'testsessionid', 'yes', :NEW.id, 'asdasfqwqwe');
END TEST_TRIGGER5;
/
db<>fiddle here
Why? Because it is a statement level trigger. If you wanted it to fire for each row, you'd - obviously - use a row level trigger which has the
for each row
clause.
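If you prefer to keep the compound-trigger form, a minimal sketch using an AFTER EACH ROW section instead of AFTER STATEMENT would behave the same way as the row-level trigger above (one SPRINT1 row per inserted UEM row):
CREATE OR REPLACE TRIGGER TEST_TRIGGER5
FOR INSERT ON UEM
COMPOUND TRIGGER
  AFTER EACH ROW IS
  BEGIN
    -- :NEW.id is the ID of the row currently being inserted
    INSERT INTO SPRINT1 (tenantid, usersessionid, newuser, totalerrorcount, userid)
    VALUES ('deneme', 'testsessionid', 'yes', :NEW.id, 'asdasfqwqwe');
  END AFTER EACH ROW;
END TEST_TRIGGER5;
/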

Edit inserted table sql

When inserting, I need to edit a value if it is null. I created a trigger, but I don't know how to edit the inserted table.
ALTER TRIGGER [trigger1] ON [dbo].[table]
INSTEAD OF INSERT
AS
    DECLARE @secuencia BIGINT, @ID_PERSONA VARCHAR;
    SELECT @secuencia = SECUENCIA FROM inserted
    SELECT @ID_PERSONA = ID_PERSONA FROM inserted
    IF @secuencia IS NULL
    BEGIN
        SET inserted.SECUENCIA = NEXT VALUE FOR SEQ_BIOINTEG --(Sequence)
    END
I don't know how to edit the inserted table.
You do not. That table is read-only.
Note how your trigger also says:
INSTEAD OF INSERT
There is no way to edit the inserted table.
What you do instead is set up an INSERT command against the original table, using the data from the inserted table - filtering to the ROWS of inserted, mostly via a join.
Changing inserted makes no sense, logically - because triggers in SQL are one of two things:
INSTEAD OF - then there is no actual insert happening for inserted to start with. Instead of doing the insert, the trigger is called. As such, changing inserted makes no sense.
AFTER - then the insert has already happened (and you UPDATE the rows). As the trigger runs after the insert, changing inserted makes no sense.
Note that I say ROWS - your trigger has one very basic error: it assumes inserted contains ONE row. It is a table - the changes may come from a statement that inserts multiple rows (which is trivial, e.g. an INSERT ... SELECT, or simply an INSERT with values for multiple rows). Handle those.
SELECT @ID_PERSONA = ID_PERSONA FROM inserted
makes NO sense - inserted is a table, so what value does ID_PERSONA from inserted contain if 2 rows are inserted? You must treat inserted like any other table.
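A minimal sketch of that approach, using the table and column names from the question (adapt them to your real schema): the INSTEAD OF trigger performs the insert itself, set-based over all rows of inserted. Two separate INSERTs are used because SQL Server does not allow NEXT VALUE FOR inside conditional expressions such as ISNULL or COALESCE.
ALTER TRIGGER [trigger1] ON [dbo].[table]
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- rows that already carry a SECUENCIA value go in unchanged
    INSERT INTO [dbo].[table] (SECUENCIA, ID_PERSONA)
    SELECT i.SECUENCIA, i.ID_PERSONA
    FROM inserted AS i
    WHERE i.SECUENCIA IS NOT NULL;

    -- rows without one get the next value from the sequence
    INSERT INTO [dbo].[table] (SECUENCIA, ID_PERSONA)
    SELECT NEXT VALUE FOR SEQ_BIOINTEG, i.ID_PERSONA
    FROM inserted AS i
    WHERE i.SECUENCIA IS NULL;
END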
Apart from all the varied issues with your trigger code, as mentioned by others, the easiest way to use a SEQUENCE value in a table is to just put it in a DEFAULT constraint:
ALTER TABLE dbo.[table]
ADD CONSTRAINT DF_table_seq
DEFAULT (NEXT VALUE FOR dbo.SEQ_BIOINTEG)
FOR SECUENCIA;
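A quick sketch of how that plays out, assuming SECUENCIA is a BIGINT and the sequence does not exist yet (the sample ID_PERSONA value is made up):
-- hypothetical sequence definition; adjust the data type and start value as needed
CREATE SEQUENCE dbo.SEQ_BIOINTEG AS BIGINT START WITH 1 INCREMENT BY 1;

-- with the DEFAULT constraint in place, simply omit SECUENCIA on insert
INSERT INTO dbo.[table] (ID_PERSONA) VALUES ('P001');
Note that a DEFAULT only applies when the column is omitted from the INSERT (or the DEFAULT keyword is used); an explicitly inserted NULL still stays NULL, which is where the INSTEAD OF trigger approach differs.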

Trigger: How does the inserted table work? How to access its rows?

I have the following table
Data --Table name
ID -- Identity column
PCode -- Postal Code
I created the following trigger:
CREATE TRIGGER Trig
ON Data
FOR INSERT
AS
BEGIN
Select * from inserted
END
And inserted the following values
INSERT INTO Data VALUES (125)
INSERT INTO Data VALUES (126)
INSERT INTO Data VALUES (127)
It shows the row from each insert separately - one single-row result per trigger execution.
But I was expecting something like this:
After the 1st insertion, the trigger is executed -> one row is shown in the inserted table.
After the 2nd insertion, the trigger is executed -> two rows are shown in the inserted table.
After the 3rd insertion, the trigger is executed -> three rows are shown in the inserted table.
According to msdn.microsoft, all the inserted rows are in this table.
How can I access the inserted table so that I can see all the expected rows together rather than separately?
You can not. From the Use the inserted and deleted Tables article on microsoft.com, you can read:
The inserted table stores copies of the affected rows during INSERT and UPDATE statements.
That means that the inserted table will only contain rows for the current INSERT or UPDATE statement.
If you do want to see all rows for several such INSERT or UPDATE statements, you will have to store these rows in a table you created yourself.
There are 2 tables available in a trigger, inserted and deleted. Each update on table XXX is actually a delete of row X from XXX followed by an insert of row X into table XXX. So inserted inside the trigger is a copy of what got inserted. You can do a lot with a trigger, but triggers are dangerous.
For example, on a performance gig I found a huge stored procedure being run by a trigger; we dropped it and the database came back online. As another example, if you write a login-auditing trigger wrongly, you can bring down the server.
As TT mentioned, if you want to see all the inserted records then you need to change your trigger to something like this:
CREATE TRIGGER Trig
ON Data
FOR INSERT
AS
BEGIN
    -- "tablename" is a separate table you created beforehand with the same columns as Data;
    -- a plain INSERT is used because SELECT ... INTO would fail once the table already exists
    INSERT INTO [tablename]
    SELECT * FROM inserted
END
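A quick sketch of how that plays out (the holding table name and column types are placeholders, assuming PCode is an integer):
-- create the holding table once, matching the columns of Data
CREATE TABLE [tablename] (ID INT, PCode INT);

-- each INSERT on Data now appends its rows to [tablename] via the trigger
INSERT INTO Data VALUES (125);
INSERT INTO Data VALUES (126);
INSERT INTO Data VALUES (127);

-- all three rows are now visible together
SELECT * FROM [tablename];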

SQL Server Unique Composite Key of Two Field With Second Field Auto-Increment

I have the following problem: I want to have a composite primary key like:
PRIMARY KEY (`base`, `id`);
where, when I insert a base, the id is auto-incremented based on the previous id for the same base.
Example:
base    id
A       1
A       2
B       1
C       1
Is there a way when I say:
INSERT INTO table(base) VALUES ('A')
to insert a new record with id 3 because that is the next id for base 'A'?
The resulting table should be:
base    id
A       1
A       2
B       1
C       1
A       3
A 3
Is it possible to do this entirely in the DB, since if done programmatically it could cause race conditions?
EDIT
The base currently represents a company, the id represents invoice number. There should be auto-incrementing invoice numbers for each company but there could be cases where two companies have invoices with the same number. Users logged with a company should be able to sort, filter and search by those invoice numbers.
Ever since someone posted a similar question, I've been pondering this. The first problem is that DBs don't provide "partitionable" sequences (ones that would restart/remember based on different keys). The second is that the SEQUENCE objects that are provided are geared around fast access and can't be rolled back (i.e., you will get gaps). This essentially rules out using a built-in utility... meaning we have to roll our own.
The first thing we're going to need is a table to store our sequence numbers. This can be fairly simple:
CREATE TABLE Invoice_Sequence (base CHAR(1) PRIMARY KEY CLUSTERED,
invoiceNumber INTEGER);
In reality the base column should be a foreign-key reference to whatever table/id defines the business(es)/entities you're issuing invoices for. In this table, you want entries to be unique per issued-entity.
Next, you want a stored proc that will take a key (base) and spit out the next number in the sequence (invoiceNumber). The set of keys necessary will vary (ie, some invoice numbers must contain the year or full date of issue), but the base form for this situation is as follows:
CREATE PROCEDURE Next_Invoice_Number @baseKey CHAR(1),
                                     @invoiceNumber INTEGER OUTPUT
AS
    -- capture the value produced by the MERGE so it can be handed back to the caller
    DECLARE @issued TABLE (invoiceNumber INTEGER);

    MERGE INTO Invoice_Sequence AS Stored
    USING (VALUES (@baseKey)) AS Incoming(base)
       ON Incoming.base = Stored.base
    WHEN MATCHED THEN
        UPDATE SET invoiceNumber = Stored.invoiceNumber + 1
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (base, invoiceNumber) VALUES (@baseKey, 1)
    OUTPUT INSERTED.invoiceNumber INTO @issued;

    SELECT @invoiceNumber = invoiceNumber FROM @issued;
Note that:
You must run this in a serialized transaction
The transaction must be the same one that's inserting into the destination (invoice) table.
That's right, you'll still get blocking per-business when issuing invoice numbers. You can't avoid this if invoice numbers must be sequential, with no gaps - until the row is actually committed, it might be rolled back, meaning that the invoice number wouldn't have been issued.
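For illustration, calling it by hand inside such a transaction might look like this (a sketch; the Invoice table is the one used in the trigger below):
-- the sequence bump and the invoice insert must commit or roll back together
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

DECLARE @invoiceNumber INTEGER;
EXEC Next_Invoice_Number @baseKey = 'A', @invoiceNumber = @invoiceNumber OUTPUT;

INSERT INTO Invoice (base, invoiceNumber)
VALUES ('A', @invoiceNumber);

COMMIT TRANSACTION;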
Now, since you don't want to have to remember to call the procedure for the entry, wrap it up in a trigger:
CREATE TRIGGER Populate_Invoice_Number ON Invoice INSTEAD OF INSERT
AS
BEGIN
    -- note: this sketch handles one inserted row at a time; a multi-row INSERT
    -- would need a loop (or a set-based rewrite) over the inserted table
    DECLARE @invoiceNumber INTEGER, @base CHAR(1);
    SELECT @base = base FROM Inserted;

    EXEC Next_Invoice_Number @base, @invoiceNumber OUTPUT;

    INSERT INTO Invoice (base, invoiceNumber)
    VALUES (@base, @invoiceNumber);
END
(obviously, you have more columns, including others that should be auto-populated - you'll need to fill them in)
...which you can then use by simply saying:
INSERT INTO Invoice (base) VALUES('A');
So what have we done? Mostly, all this work was about shrinking the number of rows locked by a transaction. Until this INSERT is committed, there are only two rows locked:
The row in Invoice_Sequence maintaining the sequence number
The row in Invoice for the new invoice.
All other rows for a particular base are free - they can be updated or queried at will (deleting information out of this kind of system tends to make accountants nervous). You probably need to decide what should happen when queries would normally include the pending invoice...
You can use a BEFORE INSERT trigger and assign the next value by taking the max(id) with a "base" filter, which is "A" in this case.
That will give you a max(id) value of 2; increment it to max(id)+1 and push the new value into the "id" field before the insert.
I think this may help you:
MSSQL Triggers: http://msdn.microsoft.com/en-in/library/ms189799.aspx
Test Table
CREATE TABLE MyTable
( base CHAR(1),
id INT
)
GO
Trigger Definition
CREATE TRIGGER dbo.tr_Populate_ID
ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO MyTable (base,id)
SELECT i.base, ISNULL(MAX(mt.id),0) +1 AS NextValue
FROM inserted i left join MyTable mt
on i.base = mt.base
GROUP BY i.base
END
Test
Execute the following statement multiple times and you will see the next values available in that group will be assigned to ID.
INSERT INTO MyTable VALUES
('A'),
('B'),
('C')
GO
SELECT * FROM MyTable
GO

multithreading with the trigger

I have written a trigger which transfers a record from a table members_new to members_old. The function of the trigger is to insert a record into members_old after an insert into members_new. So suppose a record is inserted into members_new like this:
nMmbID    nMmbName    nMmbAdd
1         Abhi        Bangalore
This record will get inserted into members_old, which has the same structure as the source table.
My trigger looks like this:
create trigger add_new_record
after insert on members_new
for each row
INSERT INTO `test`.`members_old`
(
  `nMmbID`,
  `nMmbName`,
  `nMmbAdd`
)
(
  SELECT
    `members_new`.`nMmbID`,
    `members_new`.`nMmbName`,
    `members_new`.`nMmbAdd`
  FROM `test`.`members_new`
  -- reads the last record from members_new to stop duplication in members_old;
  -- this should also reduce the chances of any error
  where nMmbID = (select max(nMmbID) from `test`.`members_new`)
)
This trigger is working for now, but my concern is what will happen if multiple insertions happen at the same instant of time.
Will it reduce performance?
Will I ever face a deadlock condition in any case, since my members_old has FKs?
If there is any better solution for this situation, please shed some light on it.
From the manual:
You can refer to columns in the subject table (the table associated with the trigger) by using the aliases OLD and NEW. OLD.col_name refers to a column of an existing row before it is updated or deleted. NEW.col_name refers to the column of a new row to be inserted or an existing row after it is updated.
create trigger add_new_record
after
insert on members_new
for each row
INSERT INTO `test`.`members_old`
SET
`nMmbID` = NEW.nMmbID,
`nMmbName` = NEW.nMmbName,
`nMmbAdd` = NEW.nMmbAdd;
And you will have no problem with deadlocks or whatever. It should also be much faster, because you don't have to read the max value beforehand (which is also unsafe and might lead to compromised data). Read about isolation levels and transactions if you're interested in why...
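A quick way to check it, assuming both tables already exist with the same three columns (the sample values are made up):
-- made-up sample row; the trigger copies it into members_old automatically
INSERT INTO `test`.`members_new` (nMmbID, nMmbName, nMmbAdd)
VALUES (2, 'Ravi', 'Mumbai');

-- verify the copy created by the AFTER INSERT trigger
SELECT * FROM `test`.`members_old` WHERE nMmbID = 2;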