Is it wiser to use a function between the first and next insertions based on a SELECT? - sql

PROCEDURE add_values
AS
BEGIN
  INSERT INTO TableA
  SELECT id, name
  FROM TableC  -- this SELECT will return multiple records
While it inserts into TableA, I would like to insert into another table (TableB) for each record that got inserted into TableA.
Note: the columns in TableA and TableB are different. Is it wise to call a function before inserting into TableB, since I would like to perform certain gets and sets based on the id inserted into TableA?

If you want to insert a set of rows into two tables, you'd have to store it in a temporary table first and then do the two INSERT statements from there:
INSERT INTO #TempTable
SELECT id, name
FROM TableC  -- this SELECT will return multiple records

INSERT INTO TableA
SELECT (fieldlist) FROM #TempTable

INSERT INTO TableB
SELECT (fieldlist) FROM #TempTable

Apart from Marc_s's answer, one more way is:
First insert the needed records into TableA from TableC, then pump the needed records from TableA to TableB.
Though many ways have already been suggested in the question you asked just 3 hours ago: How to Insert Records based on the Previous Insert
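That two-step approach can be sketched as follows. The TableB column names and the scalar function are assumptions (the question doesn't list TableB's columns or the gets/sets it needs), so treat this as a shape, not a drop-in:

```sql
-- Step 1: copy the needed records from TableC into TableA
INSERT INTO TableA (id, name)
SELECT id, name
FROM TableC;

-- Step 2: pump the matching records from TableA into TableB,
-- applying whatever per-row logic is needed.
-- dbo.fn_transform is a hypothetical scalar function standing in
-- for the "gets and sets based on the id" the question mentions.
INSERT INTO TableB (a_id, derived_value)
SELECT a.id, dbo.fn_transform(a.id)
FROM TableA a
WHERE NOT EXISTS (SELECT 1 FROM TableB b WHERE b.a_id = a.id);
```

The NOT EXISTS guard keeps step 2 idempotent, so re-running the procedure doesn't duplicate rows in TableB.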

Could insert statements match data from another table

I am trying to do an insert on a table from another table with matched values and IDs.
Let's say table1 has names and IDs, like John and 55.
I am trying to insert only the 55 into table2, since John is already in table2 but is just missing his ID.
I know I can write an UPDATE statement to set John's value to 55, but my tables have over 3000 values, so it would be hard to do them one at a time.
Is there any way I can write a query to enter a value into the other table as long as the names match?
What I have tried so far:
insert into desired_table (id, version, source_id, description, r_id)
select HI_SEQUENCE.nextval, '0',
       (select min(id)
        from table
        where name in (select name from table2 where table2_name is not null)),
       table2_name,
       table2.r_id
from table2
where name is not null;
The issue with this statement is that it inserts multiple rows, but it only ever pulls the one minimum ID.
Is there any way I can adjust this so it pulls more than one ID?
Use the MERGE statement (https://learn.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver15):
MERGE INTO Table1 d
USING Table2 s
  ON d.name = s.name
WHEN MATCHED THEN
  UPDATE SET d.age = s.age
WHEN NOT MATCHED THEN
  INSERT (col1, col2)
  VALUES (s.col1, s.col2);
You might want a trigger to automate the above task (Oracle syntax):
CREATE TRIGGER sample
AFTER INSERT ON Table1
FOR EACH ROW
BEGIN
  UPDATE table2
  SET table2.age = :NEW.age
  WHERE table2.id = :NEW.id;
END;
Got this working by generating the insert statements and running them with INSERT ALL.
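Alternatively, a hedged sketch of the same fix as a single INSERT ... SELECT: a join pulls every matching ID instead of just the MIN(). The table and column names mirror the attempt above and may not match the real schema:

```sql
-- Join table1 (names + ids) to the table2 rows that are missing their id,
-- so each matching name contributes its own id rather than MIN(id).
INSERT INTO desired_table (id, version, source_id, description, r_id)
SELECT HI_SEQUENCE.NEXTVAL, '0', t1.id, t2.table2_name, t2.r_id
FROM table2 t2
JOIN table1 t1
  ON t1.name = t2.name
WHERE t2.table2_name IS NOT NULL;
```

Note that Oracle allows sequence.NEXTVAL in the top-level select list of an INSERT ... SELECT, but not inside subqueries or together with DISTINCT/GROUP BY.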

How to move SQL records from one table to another inside a trigger

I have a SQL table (table A) with a column called 'Number'. Inside a trigger (I have an AFTER INSERT, UPDATE trigger) I want to move all records that have the same Number to a different table (say table B).
So if it's an insert, I want to move all records with that Number to table B, so that only the new record exists in the original table (table A).
If it's an update, I want to make a copy of the record being updated (with its old values) in table B.
So table B is a history table that holds all previous records, and table A should only have one entry per Number.
It would be nice to put this in the existing AFTER INSERT/UPDATE trigger, but if I need another trigger, that's fine.
Thanks for any help.
This turned out not as complicated as I thought...
(@itemId is set from a cursor)
SELECT @tempNumber = [Number] FROM tableA WHERE Id = @itemId;
IF (NOT EXISTS (SELECT * FROM deleted)) -- it's an insert
BEGIN
  INSERT INTO tableB SELECT *, GETDATE() AS CreatedDate FROM tableA WHERE Number = @tempNumber AND Id != @itemId
  DELETE FROM tableA WHERE Number = @tempNumber AND Id != @itemId
END
I decided not to create an entry on update, but if needed, I believe you would just add an ELSE branch, SELECT from deleted, and then INSERT.
Hope this helps someone!
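For reference, a hedged sketch of what that ELSE branch might look like, assuming tableB's columns line up with tableA's plus the CreatedDate used above:

```sql
ELSE -- it's an update: archive the old values from the deleted pseudo-table
BEGIN
  INSERT INTO tableB
  SELECT d.*, GETDATE() AS CreatedDate
  FROM deleted d;
END
```

Because deleted holds the pre-update row images, this copies the old values into the history table while tableA keeps the new ones.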

Update trigger select fields from same row

I want an update trigger on a specific field. If that field's value is changed, I want to insert into a different table, selecting all the values of the row where the update was made, even though only one field changed.
Example:
id | value1 | value2
1  | abc    | efg
If value1 is updated to hij, I want to select id (1), value1 (hij) and value2 (efg) and insert them into a different table.
I cannot do inserted.Id or inserted.value2, since those fields are not updated.
NOTE: only one field is updated; the other field values are the same before and after. In my question I have just used an example, but in real life a record will be inserted, and I am expected to insert the same values into a different table. However, upon insert the record won't be approved until later; when the approved field's value is changed, that's when I am expected to bring the values from the other fields to the different table.
In your UPDATE trigger, you have access to the Deleted and Inserted pseudo-tables, which contain the old values (before the UPDATE) and the new values (after the UPDATE).
So you should be able to write something like this:
CREATE TRIGGER trg_Updated
ON dbo.YourTableName
FOR UPDATE
AS
  INSERT INTO dbo.ThisOtherTableOfYours (Id, Value1, Value2)
  SELECT i.Id, i.Value1, i.Value2
  FROM Inserted i
  INNER JOIN Deleted d ON i.Id = d.Id
  WHERE i.Value1 <> d.Value1
The SELECT basically joins the two pseudo-tables with the old and new values, and selects those rows which have a difference in the Value1 column.
From those rows, the new values after the update are inserted into your other table. And the Inserted table does contain ALL columns (with their new values) from your table - not just those that were actually updated - ALL of them!
You could use a simple trigger (Oracle syntax):
CREATE OR REPLACE TRIGGER name
AFTER UPDATE ON tablename
FOR EACH ROW
BEGIN
  IF :NEW.value1 = 'hij' THEN
    INSERT INTO othertable (id, value1, value2)
    VALUES (:OLD.id, :NEW.value1, :OLD.value2);
  END IF;
END;

How to fix this stored procedure problem

I have 2 tables. The following is just a stripped-down version of them.
TableA
Id <pk> incrementing
Name varchar(50)
TableB
TableAId <pk> non incrementing
Name varchar(50)
Now these tables have a relationship to each other.
Scenario
User 1 comes to my site and does some actions (in this case, adds rows to TableA). So I use SqlBulkCopy to bulk-load all this data into TableA.
However, I also need to add the data to TableB, but I don't know the newly created IDs from TableA, as SqlBulkCopy won't return them.
So I am thinking of having a stored procedure that finds all the id's that don't exist in Table B and then insert them in.
INSERT INTO TableB (TableAId , Name)
SELECT Id,Name FROM TableA as tableA
WHERE not exists( ...)
However, this comes with a problem. A user can delete something from TableB at any time. So if a user deletes a row, and then another user (or even the same user) comes along and does something to TableA, my stored procedure will bring back that deleted row in TableB, since it will still exist in TableA but not in TableB and thus satisfy the stored procedure's condition.
So is there a better way of dealing with two tables that need to be updated when using bulk insert?
SqlBulkCopy complicates this, so I'd consider using a staging table and an OUTPUT clause.
For example, in a mixture of client pseudo-code and SQL:
create SQLConnection
Create #temptable
Bulkcopy to #temptable
Call proc on same SQLConnection
proc:
INSERT tableA (..)
OUTPUT INSERTED.key, .. INTO TableB
SELECT .. FROM #temptable
close connection
Notes:
- #temptable will be local to the connection, so it is isolated
- the writes to A and B will be atomic
- overlapping or later bulk loads don't affect what goes into A and B
- emphasising the last point: A and B will only ever be populated from the set of rows in #temptable
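A concrete sketch of that proc, with hypothetical column names (Name here stands in for whatever TableA actually carries):

```sql
-- Sketch only: assumes TableA(Id IDENTITY PRIMARY KEY, Name) and
-- TableB(TableAId, Name), with the bulk-copied rows already in #temptable.
CREATE PROCEDURE dbo.CopyFromStaging
AS
BEGIN
    INSERT INTO TableA (Name)
    -- OUTPUT captures the identity values generated by this INSERT
    OUTPUT INSERTED.Id, INSERTED.Name
    INTO TableB (TableAId, Name)
    SELECT Name FROM #temptable;
END
```

One caveat: the target of OUTPUT ... INTO cannot have enabled triggers or be on either side of a foreign-key relationship. If TableB carries a real FK to TableA, you would OUTPUT into another temp table first and then insert into TableB from there.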
Alternative:
Add another column to A and B called sessionid and use that to identify row batches.
One option would be to use SQL Server's OUTPUT clause:
INSERT YourTable (name)
OUTPUT INSERTED.*
VALUES ('NewName')
This will return the id and name of the inserted rows to the client, so you can use them in the insert operation for the second table.
Just as an alternative solution, you could use a database trigger to update the second table.

"Merging" two tables in T-SQL - replacing or preserving duplicate IDs

I have a web application that uses a fairly large table (millions of rows, about 30 columns). Let's call that TableA. Among the 30 columns, this table has a primary key named "id", and another column named "campaignID".
As part of the application, users are able to upload new sets of data pertaining to new "campaigns".
These data sets have the same structure as TableA, but typically only about 10,000-20,000 rows.
Every row in a new data set will have a unique "id", but they'll all share the same campaignID. In other words, the user is loading the complete data for a new "campaign", so all 10,000 rows have the same "campaignID".
Usually, users are uploading data for a NEW campaign, so there are no rows in TableA with the same campaignID. Since the "id" is unique to each campaign, the id of every row of new data will be unique in TableA.
However, in the rare case where a user tries to load a new set of rows for a "campaign" that's already in the database, the requirement was to remove all the old rows for that campaign from TableA first, and then insert the new rows from the new data set.
So, my stored procedure was simple:
BULK INSERT the new data into a temporary table (#tableB)
Delete any existing rows in TableA with the same campaignID
INSERT INTO Table A ([columns]) SELECT [columns] from #TableB
Drop #TableB
This worked just fine.
But the new requirement is to give users 3 options when they upload new data for handling "duplicates" - instances where the user is uploading data for a campaign that's already in TableA.
1. Remove ALL data in TableA with the same campaignID, then insert all the new data from #TableB. (This is the old behavior. With this option, there will never be duplicates.)
2. If a row in #TableB has the same id as a row in TableA, update that row in TableA with the row from #TableB. (Effectively, this "replaces" the old data with the new data.)
3. If a row in #TableB has the same id as a row in TableA, ignore that row in #TableB. (Essentially, this preserves the original data and ignores the new data.)
A user doesn't get to choose this on a row-by-row basis. She chooses how the data will be merged, and this logic is applied to the entire data set.
In a similar application I worked on that used MySQL, I used the "LOAD DATA INFILE" function, with the "REPLACE" or "IGNORE" option. But I don't know how to do this with SQL Server/T-SQL.
Any solution needs to be efficient enough to handle the fact that TableA has millions of rows, and #TableB (the new data set) may have 10k-20k rows.
I googled for something like a "MERGE" command (which seems to be supported in SQL Server 2008), but I only have access to SQL Server 2005.
In rough pseudocode, I need something like this:
If user selects option 1:
[I'm all set here - I have this working]
If user selects option 2 (replace):
merge into TableA as Target
using #TableB as Source
on TableA.id=#TableB.id
when matched then
update row in TableA with row from #TableB
when not matched then
insert row from #TableB into TableA
If user selects option 3 (preserve):
merge into TableA as Target
using #TableB as Source
on TableA.id=#TableB.id
when matched then
do nothing
when not matched then
insert row from #TableB into TableA
How about this?
option 2:
begin tran;
delete from tablea where exists (select 1 from tableb where tablea.id=tableb.id);
insert into tablea select * from tableb;
commit tran;
option 3:
begin tran;
delete from tableb where exists (select 1 from tablea where tablea.id=tableb.id);
insert into tablea select * from tableb;
commit tran;
As for performance, so long as the id field(s) in tablea (the big table) are indexed, you should be fine.
Why are you using upserts when he said he wanted a MERGE? MERGE in SQL 2008 is faster and more efficient.
I would let the MERGE handle the differences.
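For readers who do have SQL Server 2008 or later, the pseudocode above maps directly onto real MERGE statements. A hedged sketch, with the column list abbreviated to (col1, col2) as an assumption since the real table has about 30 columns:

```sql
-- Option 2 (replace): update matching rows, insert the rest
MERGE INTO TableA AS Target
USING #TableB AS Source
    ON Target.id = Source.id
WHEN MATCHED THEN
    UPDATE SET Target.col1 = Source.col1, Target.col2 = Source.col2
WHEN NOT MATCHED THEN
    INSERT (id, col1, col2) VALUES (Source.id, Source.col1, Source.col2);

-- Option 3 (preserve): omit the WHEN MATCHED clause entirely,
-- so existing rows are left untouched and only new ids are inserted
MERGE INTO TableA AS Target
USING #TableB AS Source
    ON Target.id = Source.id
WHEN NOT MATCHED THEN
    INSERT (id, col1, col2) VALUES (Source.id, Source.col1, Source.col2);
```

On SQL Server 2005, the delete-then-insert transactions in the answer above remain the practical equivalent.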