We are using an ERP system that runs on SQL Server. It has a function which creates a row 'A' in a specific table and populates it with data from a row 'B' in another table. For some reason the programmer decided that only certain values of 'B' are needed in 'A', so only some of 'B's columns are copied.
Now I want more columns to be copied than the program copies. The columns are there but they don't get copied.
The program offers a way to run a script before the SQL statement that creates the row is executed. The problem is that I don't know the ID of the row that will be created, and even if I did, the row wouldn't exist yet, so I couldn't alter it.
Is there a way in SQL Server to run a SQL script every time a row has been created in a specific table?
Thanks for the help.
Yes - those are called triggers.
You can write triggers that get fired after INSERT, UPDATE or DELETE - or they can be INSTEAD OF triggers, too - if you need to completely take control of an operation.
In your case, I believe an AFTER INSERT trigger should be just fine:
CREATE TRIGGER TrgCopyAdditionalColumns
ON dbo.TableA
AFTER INSERT
AS
-- the newly inserted rows (there could be **multiple!**)
-- will be available in the `Inserted` pseudo table, which has the
-- exact same structure as "TableA" (the table this trigger is on);
-- use it to find the matching rows in "TableB" and pick out
-- the columns you want to copy into "TableA"
INSERT INTO dbo.TableA (Col1, Col2, ..., ColN)
SELECT
b.Col1, b.Col2, ..., b.ColN
FROM
dbo.TableB AS b
INNER JOIN
-- somehow, you need to connect your Table B's rows to the
-- newly inserted rows for Table A that are present in the
-- "Inserted" pseudo table, to get only those rows of data from
-- Table B that are relevant to the newly inserted Table A rows
Inserted i ON b.A_ID = i.ID
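If the goal is to fill additional columns on the row that was just created (rather than insert further rows into TableA), the same join against Inserted can drive an UPDATE instead. A minimal sketch; the trigger name and ColX/ColY are hypothetical, and the A_ID/ID join keys follow the example above:

CREATE TRIGGER TrgFillAdditionalColumns
ON dbo.TableA
AFTER INSERT
AS
    -- touch only the rows that were just inserted, filling their extra
    -- columns from the matching "TableB" rows
    UPDATE a
    SET a.ColX = b.ColX,
        a.ColY = b.ColY
    FROM dbo.TableA AS a
    INNER JOIN Inserted AS i ON i.ID = a.ID
    INNER JOIN dbo.TableB AS b ON b.A_ID = i.ID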
In SQL Server, when does a trigger get fired?
The problem is, I have a table into which 45,000 records are going to be inserted, and I want to copy all 45k records to other tables.
But I don't want the trigger to run once per inserted row, i.e. fire 45,000 times.
My trigger basically copies records from TableA to TableB.
Trigger:
Create trigger tri_1
on TableA
after insert
as
Begin
Insert into TableB (ID,Name,Others)
select TableA.ID, TableA.Name, TableA.Others from TableA
inner join inserted
on inserted.ID = TableA.ID
End
The above is just the template of my trigger.
Also, a question about the trigger above: how does it work? Does it fire for each row, or once after the whole insert is done?
In SQL Server, the trigger is fired once, after the INSERT statement has completed, not once per row.
In some databases, the trigger is executed once for each row inserted (in those databases, FOR EACH ROW is often part of the syntax). By contrast, SQL Server keeps track of all the rows changed by the statement, which is why they are exposed in the table-like pseudo tables inserted and deleted; it is a mistake to assume that these contain only one row.
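So for the 45,000-row load above, the trigger body runs once and the inserted pseudo table holds all 45,000 new rows; a single set-based INSERT ... SELECT copies them in one pass. A minimal sketch using the column names from the question's template (since inserted already has TableA's columns, the join back to TableA is not strictly needed):

Create trigger tri_1
on TableA
after insert
as
Begin
    -- fires once per INSERT statement; "inserted" holds every row that
    -- statement added, whether that is 1 row or 45,000
    Insert into TableB (ID, Name, Others)
    select i.ID, i.Name, i.Others
    from inserted i
End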
I need to INSERT a row in table_A depending on the information in a row in table_B.
Is it possible to do this in an isolated way, so that the row read from table_B stays locked until either the new row is INSERTed into table_A or the INSERT is skipped because of the information in table_B's row?
It's really not clear what you are trying to say; I think your problem can be solved by using a trigger.
Check this site to learn more about triggers:
http://www.codeproject.com/Articles/25600/Triggers-SQL-Server
You can do this:
INSERT INTO table_A (columns) SELECT columns FROM table_B WHERE condition;
The columns retrieved by the SELECT must match the column list given for table_A.
PostgreSQL supports MVCC; custom locking can be done, but it is not recommended.
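If this is SQL Server (as most of this thread is), one way to keep the read and the conditional insert isolated is a single INSERT ... SELECT with locking hints on table_B inside a transaction. A sketch only; @rowId, col1, col2 and some_flag are hypothetical names:

DECLARE @rowId int = 42;   -- the table_B row driving the decision

BEGIN TRANSACTION;

-- the INSERT ... SELECT is one atomic statement; UPDLOCK + HOLDLOCK keep
-- the selected table_B row locked until the transaction ends, so nothing
-- can change it between the check and the insert
INSERT INTO table_A (col1, col2)
SELECT b.col1, b.col2
FROM table_B AS b WITH (UPDLOCK, HOLDLOCK)
WHERE b.id = @rowId
  AND b.some_flag = 1;     -- the condition deciding whether to insert at all

COMMIT TRANSACTION;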
So there's this table of just about 40,000 rows I am looking to update. A colleague said it's best to incrementally update the table instead of doing a complete delete and load.
So I've tried hashing out the design and logic of a script to do this, but my inexperience is getting to me. I just don't know what's efficient and what's unnecessary when incrementally updating a table.
Currently, the warehouse looks like this: data comes from source into a table (let's call this T1) in Teradata. Then it's sent into another table (let's call this T2) in Teradata with some added fields such as timestamp. Lastly, a view is built on that last table for security reasons.
So with that laid out, I was thinking of creating a temp/volatile table with the data from T1. This would have all the data, including new records, up to the time the script is run. Then, go through that table checking whether each ID (the primary index) already exists in T2 and, if not, adding it to another temp table. Then somehow combine the second temp table with T2, replace T2 with the result, and build the view on top of that.
Does this make any sense?
There's also the possibility of records being updated, so they would already exist in T2 but have updated data in a new version of T1. I think comparing the values of all the columns from T1 to T2 would be highly inefficient, but I can't think of another way to do this.
A 40,000 row delete and insert should be pretty painless for any modern database. Ditto for updates.
The real reason for doing an incremental delete/update/insert is so you can log the changes and timestamp rows in the permanent table with the date/time of insertion and/or last update. The usual technique goes something like this:
remove rows from the permanent table that don't exist in the temp table
update rows that exist in both tables
insert rows that exist in the temp table, but don't exist in the permanent table.
Looking at the Teradata docs, that would be something like this (no warranties about this being syntactically correct, since I don't have a Teradata instance to play with):
delete permanent p
where not exists ( select *
from temp t
where t.id = p.id
)
update p
from permanent p ,
temp t
set ...
where t.id = p.id
insert permanent
select ...
from temp t
where not exists ( select *
from permanent p
where p.id = t.id
)
One might note that the deletes might get a little hairy if there are dependent foreign key constraints involved.
One might also note that on the update, the where clause might get a tad...complicated if you want to check for actual changes to column values: not much point in updating a row if nothing has changed.
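For illustration, the change-detection version of the update might look roughly like this. A sketch only, in the same pseudo-Teradata style as above, with col1 and col2 standing in for the real data columns (note that NULLs never compare equal, so nullable columns need the extra checks):

update p
from permanent p ,
     temp t
set col1 = t.col1 ,
    col2 = t.col2
where t.id = p.id
  and (    p.col1 <> t.col1
        or p.col2 <> t.col2
        -- repeat this pair of checks for every nullable column
        or ( p.col1 is null and t.col1 is not null )
        or ( p.col1 is not null and t.col1 is null )
      )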
There's a Teradata MERGE command that you might find useful; check this post:
https://forums.teradata.com/forum/database/merge-syntax-simple-version
merge into merge_tmp as t using (select 1 as a,'stf' as b,'uuj' as c) as s
on t.a = s.a
when matched then update set c = s.c
when not matched then insert values (s.a,s.b,s.c);
If you need to match on more columns, simply add an AND to the ON clause.
Edit: if you want to use MERGE, you might also need a delete statement like the one in nicholas' post.
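Putting those two notes together: a multi-column match plus the companion delete for rows that have disappeared from the staging table. This is a sketch only, reusing the permanent/temp names from the earlier statements; source_system, col1 and col2 are hypothetical columns:

merge into permanent as p
using temp as t
on p.id = t.id and p.source_system = t.source_system
when matched then
    update set col1 = t.col1, col2 = t.col2
when not matched then
    insert (id, source_system, col1, col2)
    values (t.id, t.source_system, t.col1, t.col2);

-- MERGE does not remove rows that have vanished from temp; that still
-- needs the separate delete from nicholas' post
delete from permanent
where not exists ( select *
                   from temp t
                   where t.id = permanent.id );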
I am trying to execute a query within a SQL trigger.
I have 4 tables: A, B, C, and D. Table A is a lookup list and contains roughly 1,400 rows of data. Table B holds values entered through an HMI, with a timestamp. Table C is the table where my values are intended to go. Table D is a list of multipliers used to multiply values from table A with values from table B (I am only using one multiplier from table D at the moment).
When a user inputs data into table B, that should fire the trigger, which gets the values that were inserted (including the item number), relates the item number to table A, and uses table D to multiply a few things together and send the results to table C. If I only input 3 rows of data into table B, for example, I should only get three rows of data in table C; I am merely using table A to match the item number and get some data. But for some reason I am inserting way more records than intended, over 1,600 rows.
Table D's multipliers have a timestamp that does not match or correlate with any other table, so I am selecting the multipliers whose timestamp is closest to the timestamp from table B (some multipliers change over time, and I need the historical multiplier to multiply the right things together).
Your help is most appreciated. Thank you.
Insert into TableC( ItemNumber, Cases, [Description], [Type], Wic, Elc, TotalElc, LbsPerCase, TotalLbs, PeopleRequired, ScheduleHours, Rated, Capacity, [TimeStamp])
Select
b.ItemNumber, b.CaseCount, a.ItemDescription, a.DivisionCode, a.workcenter,
a.LaborPercase as ELC, b.CaseCount * a.LaborPerCase * d.IpCg,
a.LbsPerCase, a.LaborPerCase * b.CaseCount as TotalLbs,
a.PersonReqd, b.Schedulehours, a.PoundRating,
b.ScheduleHours * a.PoundRating as Capacity, b.shift, GETDATE()
from
TableA a, TableB b, TableD d
Where
a.itemnumber = b.itemnumber
and d.IpCG < b.TimeStamp
and b.CaseCount > 0
You do not reference the inserted or deleted pseudo tables that are available only inside the trigger, so of course you are returning more records than you need in your query.
When first writing a trigger, what I do is create a temp table called #inserted (and/or #deleted) and populate it with several records. It should match the design of the table that the trigger will be on. It is important to give your temp table several input records that meet the various criteria affecting your query (so in your case some where the case count is 0 and some where it is not, for instance) and that are typical of the data inserted into or updated in the table. SQL Server triggers operate on sets of data, so this also ensures that your trigger can properly handle multi-record inserts and updates. A properly written trigger has test cases you need to run to make sure everything happens correctly; your #inserted table should include records that cover all those cases.
Then write the query in a transaction (and roll it back while you are testing), joining to #inserted. If you are doing an insert with a select, write only the select part until you get that right, then add the insert. For testing, select from the table you are inserting into to see the data you inserted before you roll back.
Once you get everything working, change the #inserted references to inserted, remove any testing code and of course the rollback (possibly the whole transaction, depending on what you are doing), and add the drop/create trigger part of the code. Now you can test your trigger as a trigger, but you are in good shape because you know from your earlier testing that it is likely to work.
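Once the select works against #inserted, the final trigger ends up shaped roughly like this. A sketch only: the trigger name and TableD's TimeStamp column are assumptions, the multiplier lookup (latest multiplier at or before the TableB row's timestamp) is just one plausible reading of the description, and the select expressions are carried over from the question (b.shift is omitted so the select lines up with the 14 insert columns):

CREATE TRIGGER trg_TableB_Insert
ON TableB
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO TableC (ItemNumber, Cases, [Description], [Type], Wic, Elc, TotalElc,
                        LbsPerCase, TotalLbs, PeopleRequired, ScheduleHours, Rated,
                        Capacity, [TimeStamp])
    SELECT
        i.ItemNumber, i.CaseCount, a.ItemDescription, a.DivisionCode, a.workcenter,
        a.LaborPerCase, i.CaseCount * a.LaborPerCase * d.IpCg,
        a.LbsPerCase, a.LaborPerCase * i.CaseCount,
        a.PersonReqd, i.ScheduleHours, a.PoundRating,
        i.ScheduleHours * a.PoundRating, GETDATE()
    FROM inserted AS i                              -- only the rows from this INSERT
    INNER JOIN TableA AS a
        ON a.ItemNumber = i.ItemNumber
    CROSS APPLY (SELECT TOP (1) d2.IpCg AS IpCg
                 FROM TableD AS d2
                 WHERE d2.[TimeStamp] <= i.[TimeStamp]    -- assumed TimeStamp column on TableD
                 ORDER BY d2.[TimeStamp] DESC) AS d       -- latest multiplier at or before the row's time
    WHERE i.CaseCount > 0;
END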
I have a trigger to copy data from table A to table B when table A is changed.
The trigger looks like this:
ALTER TRIGGER ATrigger
ON A AFTER INSERT, DELETE, UPDATE
AS
BEGIN
SET NOCOUNT ON;
DELETE FROM B WHERE id IN (SELECT id FROM deleted)
INSERT INTO B (Id, col1, col2) SELECT i.Id, i.col1, i.col2 FROM inserted i
END
But I see that not all the data inserted into A gets copied to B; which rows get copied seems very random.
I was searching around and found it might be caused by multi-row inserts. Someone suggested using a cursor, but I think in my case it should be fine to insert and delete using the inserted and deleted tables with these two statements.
Please advise, thanks!
I'm not certain this is your problem, but your trigger has two "gotchas". First, on an insert the deleted table has no rows in it, so no deletes will be done. Second is the reverse, and potentially your problem: on a delete the inserted table has no rows, so all of the IDs in deleted are removed from table B but nothing is re-inserted. On top of this, if ID is not a unique key for table A, then when you insert a second copy of an ID you will delete all of its history in table B and only add the "new" rows.
If you can provide more information on the structure of the two tables and the purpose of the trigger, not to mention any patterns in which rows are or are not being copied, we can be of more help.
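For what it's worth, one way to make those cases explicit while debugging is to branch on which pseudo table actually has rows. A sketch only, not a fix, since the right behavior for the delete case depends on whether B is meant to keep history:

ALTER TRIGGER ATrigger
ON A AFTER INSERT, DELETE, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    IF EXISTS (SELECT 1 FROM inserted)
    BEGIN
        -- INSERT or UPDATE: remove the old copies (deleted is empty on a
        -- plain INSERT) and copy in the new rows
        DELETE FROM B WHERE id IN (SELECT id FROM deleted);
        INSERT INTO B (Id, col1, col2)
        SELECT i.Id, i.col1, i.col2 FROM inserted AS i;
    END
    ELSE IF EXISTS (SELECT 1 FROM deleted)
    BEGIN
        -- pure DELETE: decide whether B should drop these rows or keep them as history
        DELETE FROM B WHERE id IN (SELECT id FROM deleted);
    END
END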