How to insert data from a source table (but different columns) into two different tables in SQL Server

I have a staging table titled [Staging]. The data from this table needs to be inserted into two separate tables. Half of the columns go to the first table (we'll call it [Table1]) and the other half go to a second table (we'll call it [Table2]).
Both of these tables have a column titled "ChainID". In [Table1] the ChainID is an identity column. In [Table2] it's not.
The ChainID is the one column that links these two tables together for when we need to query this data.
I currently have it set up so that the insert into [Table1] generates the new ChainIDs. I can use OUTPUT INSERTED.ChainID to get the ChainIDs that were generated, but my problem is tying them back to the original staging table in order to grab the rest of the data for the second table.
DECLARE @Staging TABLE
(
[RowID] [int],
[ChainID] [varchar](50) ,
[LoanNo] [varchar](50) ,
[AssignmentFrom] [varchar](4000),
[AssignmentTo] [varchar](4000),
[CustodianUID] [nvarchar](100) null,
[DocCode] [nvarchar](100) null
)
INSERT INTO
@Staging
SELECT
RowID,
ChainID,
LoanNo,
AssignmentFrom,
AssignmentTo,
CustodianUID,
DocCode
FROM
[MDR_CSV].[dbo].[TblCollateralAssignmentChainImport]
WHERE
UploadID = 1
This is where we do the insert into the first table, which generates the new ChainIDs that will be needed to merge into Table2.
INSERT INTO
Table1
SELECT
LoanNo,
AssignmentFrom,
AssignmentTo,
CustodianUID
FROM
@Assignments AS MDRCA
WHERE
MDRCA.ChainID IS NULL
Now I need to insert the data from the DocCode field into Table2. I can get the list of newly generated ChainIDs by doing something such as
OUTPUT INSERTED.ChainID
But that alone doesn't tie the newly generated ChainIDs back to the corresponding rows of the staging table in order to do the insert into Table2.

I ended up figuring out a solution. I used a cursor to go through the staging rows one by one. That allowed me to do the insert into the first table (with only the pertinent columns from the staging table) along with an OUTPUT INSERTED.ChainID that stores the newly generated ChainID in a table variable. I then assign that value to a regular variable:
@ChainID = (SELECT TOP 1 nChainId FROM @tableVariable)
I could then use this ChainID to insert into the second table along with the rest of the data from the same staging row (which was possible thanks to the cursor).
Then, before the cursor loops, I empty the table variable so that the SELECT TOP 1 works correctly each time.
Not sure if it's the best or most efficient way to go about this, but at least it worked!
Here's an example showing how I got it to work:
TblCollateralAssignmentChainImport is the Staging Table
temp_tblCollateralAssignment is Table1
temp_CustodianData is Table2
DECLARE @RowID int, @ChainID int;
DECLARE import_cur CURSOR FOR
SELECT RowID
FROM [MDR_CSV].[dbo].[TblCollateralAssignmentChainImport]
WHERE ChainID IS NULL
ORDER BY RowID;
OPEN import_cur;
FETCH NEXT FROM import_cur
INTO @RowID;
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @NewlyCreatedChainId table (nChainId int);
    -- Insert one staging row into Table1 and capture the generated ChainID
    INSERT INTO temp_tblCollateralAssignment (LoanNo, AssignmentFrom, AssignmentTo)
    OUTPUT INSERTED.ChainID INTO @NewlyCreatedChainId
    SELECT LoanNo, AssignmentFrom, AssignmentTo
    FROM TblCollateralAssignmentChainImport
    WHERE RowID = @RowID;
    SET @ChainID = (SELECT TOP 1 nChainId FROM @NewlyCreatedChainId);
    -- Use the captured ChainID for the matching insert into Table2
    INSERT INTO temp_CustodianData (ChainID, LoanNo, CustodianUID, DocCode)
    SELECT @ChainID, import.LoanNo, import.CustodianUID, import.DocCode
    FROM TblCollateralAssignmentChainImport AS import
    WHERE RowID = @RowID;
    -- Empty the table variable so TOP 1 picks up the next row's ChainID
    DELETE FROM @NewlyCreatedChainId;
    FETCH NEXT FROM import_cur
    INTO @RowID;
END
CLOSE import_cur;
DEALLOCATE import_cur;
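For what it's worth, the cursor can usually be replaced by a single set-based statement. In an INSERT ... SELECT, the OUTPUT clause can only reference columns of the inserted row, but MERGE allows OUTPUT to reference source columns, so you can capture the staging RowID next to each newly generated ChainID in one pass. A sketch using the table names above (untested against the real schema, so treat the column lists as assumptions):

```sql
DECLARE @Map TABLE (RowID int, ChainID int);

-- An always-false match condition turns MERGE into a pure insert,
-- while still letting OUTPUT see the source (staging) columns.
MERGE temp_tblCollateralAssignment AS target
USING (
    SELECT RowID, LoanNo, AssignmentFrom, AssignmentTo, CustodianUID, DocCode
    FROM TblCollateralAssignmentChainImport
    WHERE ChainID IS NULL
) AS src
ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (LoanNo, AssignmentFrom, AssignmentTo)
    VALUES (src.LoanNo, src.AssignmentFrom, src.AssignmentTo)
OUTPUT src.RowID, INSERTED.ChainID INTO @Map (RowID, ChainID);

-- Join the captured map back to the staging table for the second insert.
INSERT INTO temp_CustodianData (ChainID, LoanNo, CustodianUID, DocCode)
SELECT m.ChainID, i.LoanNo, i.CustodianUID, i.DocCode
FROM @Map AS m
JOIN TblCollateralAssignmentChainImport AS i ON i.RowID = m.RowID;
```

Both inserts then run once per batch instead of once per staging row.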

Related

I want to fetch the last altered records from a table

I have a main table. Real-time records get added to that table. I want to fetch all records which have been added, as well as any changes to previously existing records.
How can I achieve this?
You can use two commonly used approaches:
Track changes with another table through a trigger.
Should be something similar to this:
CREATE TABLE Tracking (
ID INT,
-- Your original table columns
TrackDate DATETIME DEFAULT GETDATE(),
TrackOperation VARCHAR(100))
GO
CREATE TRIGGER TrackingTrigger ON OriginalTable AFTER UPDATE, INSERT, DELETE
AS
BEGIN
INSERT INTO Tracking(
ID,
TrackOperation
-- Other columns
)
SELECT
ID = ISNULL(I.ID, D.ID),
TrackOperation = CASE
WHEN I.ID IS NOT NULL AND D.ID IS NOT NULL THEN 'Update'
WHEN I.ID IS NOT NULL THEN 'Insert'
ELSE 'Delete' END
-- Other columns
FROM
inserted AS I
FULL JOIN deleted AS D ON I.ID = D.ID -- ID is primary key
END
GO
Include CreatedDate, ModifiedDate and IsDeleted columns on your table. CreatedDate should have a default of the current date, ModifiedDate should be updated each time your data is updated, and IsDeleted should be flagged when you are deleting (rather than actually deleting the row). This option requires a lot more handling than the previous one, and you won't be able to track consecutive updates.
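This second approach can be sketched like this (the column and constraint names are illustrative, not from the original post):

```sql
-- Add audit columns to the existing table
ALTER TABLE OriginalTable ADD
    CreatedDate  DATETIME NOT NULL CONSTRAINT DF_OriginalTable_Created DEFAULT GETDATE(),
    ModifiedDate DATETIME NULL,
    IsDeleted    BIT NOT NULL CONSTRAINT DF_OriginalTable_Deleted DEFAULT 0;

-- Every update must then also touch ModifiedDate:
-- UPDATE OriginalTable SET ..., ModifiedDate = GETDATE() WHERE ...
-- and deletes become soft deletes:
-- UPDATE OriginalTable SET IsDeleted = 1, ModifiedDate = GETDATE() WHERE ...
```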
You have to look your table up in sys.objects and grab its object_id before querying the usage-stats DMV.
declare @objectid int
select @objectid = object_id from sys.objects where name = 'YOURTABLENAME'
select top 1 * from sys.dm_db_index_usage_stats where object_id = @objectid
and last_user_update is not null
order by last_user_update desc
If you have an identity column in your table, you may find the last inserted row through a SQL query. For that, we have multiple options:
@@IDENTITY
SCOPE_IDENTITY()
IDENT_CURRENT('table_name')
All three functions return last-generated identity values. However, the scope and session over which "last" is defined differ between them.
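In brief (MyTable is a placeholder name):

```sql
-- After inserting into a table with an identity column:
SELECT SCOPE_IDENTITY();          -- last identity generated in the current session AND scope
SELECT @@IDENTITY;                -- last identity in the current session, any scope
                                  -- (an insert done by a trigger can change this value)
SELECT IDENT_CURRENT('MyTable');  -- last identity for that table, from any session
```

SCOPE_IDENTITY() is usually the safest choice after a single-row insert, since triggers and other sessions cannot affect it.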

Insert Values from Table Variable into already EXISTING Temp Table

I'm successfully inserting values from a Table Variable into a new (not yet existing) Temp Table. I have no issues when inserting a small number of rows (e.g. 10,000), but when inserting a lot of rows (e.g. 30,000) into the Table Variable it throws the error "Server ran out of memory and external resources".
To work around the issue:
I split my (60,000) Table Variable rows into small batches (e.g. 10,000 each), thinking I could insert the new data into the already existing Temp Table, but I'm getting this error message:
There is already an object named '##TempTable' in the database.
My code is:
USE MyDataBase;
GO
DECLARE @TableVariable TABLE
(
[ID] bigint PRIMARY KEY,
[BLD_ID] int NOT NULL
-- 25 more columns
)
INSERT INTO @TableVariable VALUES
(1,25),
(2,30)
-- 61,000 more rows
SELECT * INTO ##TempTable FROM @TableVariable;
SELECT COUNT(*) FROM ##TempTable;
Below is the error message I'm getting
The problem is that SELECT ... INTO wants to create the destination table, so on the second run you get the error.
First you have to create #TempTable:
/* this creates the temp table by copying the @TableVariable structure */
SELECT *
INTO #TempTable
FROM @TableVariable
WHERE 1 = 0;
Now you can loop through your batches and run this insert as many times as you want:
INSERT INTO #TempTable
SELECT * FROM @TableVariable;
Pay attention that #TempTable is different from ##TempTable (# = local, ## = global), and remember to drop it when you have finished.
Also, a local #TempTable is enough here; you should not use the global ## prefix.
I hope this helps.

3 tables, 2 DBs, 1 Stored Procedure

I'm a novice when it comes to Stored Procedures in SQL Server Management Studio. I have an application that I was told to make the following changes to using a stored procedure:
Step 1. User types in an item number.
Step 2. Customer name, address, etc. displays in the other fields on the same form.
There are 3 tables: Bulk orders, Small orders, and Customer information.
Bulk orders and small orders are in Database_1 and Customer information is in Database_2.
The primary key for small orders is the order number. A column in small orders contains the customer number for each order. That customer number is the primary key in the customer table.
The bulk orders table is similar.
I want to include a conditional statement that says: if the order number is found in the small orders table, show the data from the customer table that correlates with that order number. I've attempted this multiple ways, but keep getting a "The multi-part identifier ... could not be bound" error.
E.g.:
SELECT DB1.db.Customer_Table.Customer_Column AS CustomerNumber;
IF(CustomerNumber NOT LIKE '%[a-z]%')
BEGIN
SELECT * FROM db.small_orders_table;
END
ELSE
BEGIN
SELECT * FROM db.buld_orders_table;
END
Please help.
Sounds like it's 2 databases on the same server. In that case, you'll need to specify the fully qualified table name (database.schema.table) when referencing a table on the other database from where your stored procedure is found:
Database_1.db.small_orders_table
First of all, you cannot use aliases as variables. If you want to assign a value to a variable in order to test it, you have to do a SELECT statement like SELECT @var = Customer_Column FROM <YourTableFullName> WHERE <condition>. Then you can use @var (which must be declared beforehand) for your test.
About the error you're experiencing, you're using fully qualified names in the wrong way. If you're on the same server (different databases), you only need to prefix the database name and then the schema of your objects. Suppose you have the following database objects in Database1:
USE Database1;
GO
CREATE TABLE dbo.Table1
(
id int IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED
, val varchar(30)
);
GO
INSERT INTO dbo.Table1 (val) VALUES ('test1');
GO
INSERT INTO dbo.Table1 (val) VALUES ('test2');
GO
INSERT INTO dbo.Table1 (val) VALUES ('test3');
GO
And the following ones on Database2:
USE Database2;
GO
CREATE TABLE dbo.Table2
(
id int IDENTITY(1, 1) NOT NULL PRIMARY KEY CLUSTERED
, val varchar(30)
);
GO
Now, suppose that you want to read from the first table the value with id = 2, and then apply your IF. Let's declare a variable and test it:
USE Database1;
GO
DECLARE @var varchar(30);
-- since you're on Database1, you don't need to specify the full name
SELECT @var = val FROM dbo.Table1 WHERE id = 2;
IF @var = 'test2'
BEGIN
SELECT id, val FROM dbo.Table1;
END
ELSE
BEGIN
-- in this case the database name is needed
SELECT id, val FROM Database2.dbo.Table2;
END
GO
Does it help?

SQL Query creating copy of old SQL entry with two values changed [duplicate]

This question already has answers here:
Quickest way to clone row in SQL
(5 answers)
Closed 8 years ago.
I have a table with around 50 to 60 columns (and counting), and I would like to know whether I can create a generic INSERT ... SELECT query to copy one row, but with two columns changed.
More specifically, I want to fetch the one global config from table configs and insert it back into configs with the global flag set to false and a new auto-increment id.
Something like:
INSERT INTO configs
(SELECT TOP 1 * FROM configs WHERE global=1)
UPDATE global=0, id=?
(And of course the new autoincrement id should be returned to me, for I have to update the user's profile.)
Here is a fully functional solution with a demonstration of how it works. I'm assuming you are completing this action inside a stored procedure. I basically clone the current global=1 row into a temp table, then drop the IDENTITY column so you can use SELECT * to reinsert the record. By using SELECT *, you will not have to update this code whenever the column count increases.
-- setup demonstration with two sample columns of data
CREATE TABLE #configs (ID INT IDENTITY(100,1), [Global] INT, ColA CHAR(2), ColB VARCHAR(2));
-- fill with values
SET NOCOUNT ON;
INSERT #configs VALUES (1,'AA','BB');
INSERT #configs VALUES (1,'CC','DD');
INSERT #configs VALUES (1,'EF','GH');
SET NOCOUNT OFF;
-- This is the target ID we are working with
DECLARE @CloneID INT = 100;
-- Examine the ID
SELECT * FROM #configs WHERE ID=@CloneID;
-- This work should be completed in a transaction
BEGIN TRANSACTION;
-- copy current "global=1" record into a temp table and change its value to 0
SELECT * INTO #temp FROM #configs WHERE ID=@CloneID AND [Global]=1;
UPDATE #temp SET [Global]=0;
-- drop off the IDENTITY column so we can select it into main table again
ALTER TABLE #temp DROP COLUMN [ID];
-- copy the old "global=1" record back into main table, its value has been changed
INSERT #configs SELECT * FROM #temp;
COMMIT;
-- Examine
SELECT * FROM #configs;
-- cleanup
DROP TABLE #temp;
DROP TABLE #configs;

Get SCOPE_IDENTITY value when inserting bulk records for SQL TableType

I have the following table structure; for convenience I am only listing the relevant columns:
Table_A (Id, Name, Desc)
Table_1 (Id is an identity column, Name, ...)
Table_2 (Id is an identity column, Table_A_Id, Table_1_Id)
The relationship between Table_1 and Table_2 is 1...*
Now I have created a table type for Table_A called TType_Table_A (which only contains Id as a column; from my C# app I send multiple records). I have achieved this bulk insert functionality as desired.
What I need: when I insert records into Table_2 from @TType_Table_A, say with the statements below, I would like to capture the Id of Table_2 for each record inserted.
declare @count int = (select count(*) from @TType_Table_A); -- a variable declared for TType_Table_A
if (@count > 0)
begin
    insert into Table_2 (Table_A_Id, Table_1_Id)
    select @SomeValue, @SomeValueAsParameter from @TType_Table_A;
end;
Now say if 2 records are inserted, I would like to capture the Id for each of these 2 records.
Any input/help is appreciated
This is how I know it can be achieved, but I want to reduce DB calls from my app and avoid a cursor in the stored procedure:
Insert a record into Table_1 and return its Id, then loop through the records, inserting each into Table_2 and returning its Id.
OR
Use cursor in stored procedure when inserting/selecting from TableType
I assume this is SQL Server? Then you can make use of the OUTPUT clause, like so:
DECLARE @NewId TABLE (MyNewId INT);
INSERT INTO Table_2 (Table_A_Id, Table_1_Id)
OUTPUT INSERTED.Id INTO @NewId (MyNewId)
SELECT @SomeValue, @SomeValueAsParameter FROM @TType_Table_A;
SELECT * FROM @NewId;
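If you also need to know which source row produced each new Id, note that with a plain INSERT the OUTPUT clause can only reference columns of the inserted row, not of @TType_Table_A. Since Table_2 itself stores Table_A_Id and Table_1_Id, you can output those alongside the new Id and correlate afterwards. A sketch (the variable names follow the question and are assumptions):

```sql
DECLARE @NewId TABLE (MyNewId INT, Table_A_Id INT, Table_1_Id INT);

INSERT INTO Table_2 (Table_A_Id, Table_1_Id)
OUTPUT INSERTED.Id, INSERTED.Table_A_Id, INSERTED.Table_1_Id
    INTO @NewId (MyNewId, Table_A_Id, Table_1_Id)
SELECT @SomeValue, @SomeValueAsParameter FROM @TType_Table_A;

SELECT MyNewId, Table_A_Id, Table_1_Id FROM @NewId;
```

When the correlation has to be to source columns that are not stored in the target table at all, the usual workaround is MERGE with an always-false match condition, whose OUTPUT clause is allowed to reference the source.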