How to retrieve SCOPE_IDENTITY of all inserts done in INSERT INTO [table] SELECT [col1, ...] [duplicate]

This question already has answers here:
SQL Server - Return value after INSERT
Suppose I have a temp table with several columns, one of which is dedicated to the identity value of the inserted Invoice, and the others to the Invoice data itself, like the following table:
CREATE TABLE #InvoiceItems
(
RowNumber INT, -- Used for inserting new invoice
SaleID INT, -- Used for inserting new invoice
BuyerID INT, -- Used for inserting new invoice
InvoiceID INT -- Intended for PK of the invoice added after inserting it
);
I use something like the following to insert data into the Invoice table:
INSERT INTO [Invoice]
SELECT [col1, ...]
FROM #InvoiceItems
How can I fill the InvoiceID column while inserting the temp table data into the Invoice table? I know about the SCOPE_IDENTITY() function, but it only returns the last inserted PK, which does not really suit my need.
I could also use a WHILE loop to do this one row at a time, but since the number of rows I'm planning to insert is immense, I feel like that's not going to be the most optimized option.
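For example, after a multi-row insert SCOPE_IDENTITY() only hands back a single value (a minimal sketch of the limitation, using a made-up #Demo table):
CREATE TABLE #Demo (id INT IDENTITY, val INT);
INSERT INTO #Demo (val) VALUES (10), (20), (30);
SELECT SCOPE_IDENTITY(); -- returns 3 (the last identity only), not 1, 2 and 3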
Thanks for the answers in advance.

To grab multiple IDENTITY values from an INSERT INTO ... SELECT ... FROM, the OUTPUT clause can be used:
-- temp table
CREATE TABLE #temp(col VARCHAR(100));
INSERT INTO #temp(col) VALUES ('A'), ('B'), ('C');
--target table
CREATE TABLE tab(
id INT IDENTITY,
col VARCHAR(100)
);
Main insert:
INSERT INTO tab(col)
OUTPUT inserted.id, inserted.col
SELECT col
FROM #temp;
The output can also be inserted into another table using OUTPUT ... INTO:
CREATE TABLE #temp_identity(id INT);
INSERT INTO tab(col)
OUTPUT inserted.id
INTO #temp_identity
SELECT col
FROM #temp;
SELECT * FROM #temp_identity;
db<>fiddle demo
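With the three sample rows and an empty tab table, #temp_identity simply ends up holding the three generated ids (1, 2, 3).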

CREATE TABLE #InvoiceItems(
RowNumber INT,
SaleID INT,
BuyerID INT,
InvoiceID INT
);
INSERT INTO #InvoiceItems (RowNumber, SaleID, BuyerID) values (1, 55, 77)
INSERT INTO #InvoiceItems (RowNumber, SaleID, BuyerID) values (1, 56, 78)
INSERT INTO #InvoiceItems (RowNumber, SaleID, BuyerID) values (1, 57, 79)
INSERT INTO #InvoiceItems (RowNumber, SaleID, BuyerID) values (1, 58, 80)
INSERT INTO #InvoiceItems (RowNumber, SaleID, BuyerID) values (1, 59, 81)
DECLARE @Inserted table( RowNumber int,
SaleID INT,
BuyerID INT,
InvoiceID INT);
INSERT INTO dbo.[Invoice] (RowNumber, SaleID, BuyerID)
OUTPUT INSERTED.RowNumber, INSERTED.SaleID, INSERTED.BuyerID, INSERTED.InvoiceID
INTO @Inserted
SELECT RowNumber, SaleID, BuyerID
FROM #InvoiceItems
UPDATE ii
SET InvoiceID = ins.InvoiceID
FROM #InvoiceItems ii
JOIN @Inserted ins on ins.BuyerID = ii.BuyerID and ins.RowNumber = ii.RowNumber and ins.SaleID = ii.SaleID
SELECT * FROM #InvoiceItems
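If RowNumber, SaleID and BuyerID are not guaranteed to identify a row uniquely, the join back can mismatch rows. An alternative (a sketch only, using the MERGE trick shown further down this page and assuming the same dbo.[Invoice] layout) is to let OUTPUT read the source columns directly, replacing the INSERT above:
MERGE dbo.[Invoice] AS inv
USING #InvoiceItems AS ii
ON 1 = 0 -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
INSERT (RowNumber, SaleID, BuyerID)
VALUES (ii.RowNumber, ii.SaleID, ii.BuyerID)
OUTPUT ii.RowNumber, ii.SaleID, ii.BuyerID, inserted.InvoiceID
INTO @Inserted;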

Related

Data Definition Language (DDL) Query SQL

I'm new to queries and SQL Server, and I wanted to know about DDL.
I have an attribute in my Table which has more than 1 value, such as
Size = {'S', 'M', 'L'}.
How could I define such an attribute in my table with a query, so I can insert multiple values into one attribute?
You don't want to do this, because it denormalizes your data. But if you must:
declare #table table (id int identity(1,1), size varchar(16))
insert into @table
values
('S')
,('M')
select * from @table
update @table
set size = size + ',M'
where id = 1
select * from @table
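After the update, the second select shows id 1 holding 'S,M', which is exactly the comma-packed value being warned about above.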
Here is a one-to-many approach with a foreign key:
create table #items (id int identity(1,1), descrip varchar(64))
insert into #items
values
('shirt'),
('pants')
create table #item_sizes (id int identity(1,1), size char(1), item_id int)
alter table #item_sizes
add constraint FK_item foreign key (item_id) references #items(id)
insert into #item_sizes
values
('S',1)
,('M',1)
,('L',1)
,('S',2)
select
ItemID = i.id
,i.descrip
,isiz.size
from #items i
inner join #item_sizes isiz
on isiz.item_id = i.id
drop table #items, #item_sizes
As per your requirement:
Product (...,ProductSize);
INSERT INTO Product (ProductSize) VALUES ('S')
How to query the rows of ProductSize so I can insert more than one value in SQL?
You can query like below:
select * from Product where ProductSize like '%S%' or ProductSize like '%L%' or ProductSize like '%M%'
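If the sizes really do end up packed into one column like that, they can at least be pulled apart again when querying (a sketch, assuming SQL Server 2016+ where STRING_SPLIT is available):
select p.ProductSize, s.value as SingleSize
from Product p
cross apply string_split(p.ProductSize, ',') s;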

SQL Server increment two tables in an update

First, SQL is NOT my strong point, as you'll see. I have one table that keeps track of the next item number, by some type, like so:
declare @maxs as table
(
Equip int,
NextId int
);
-- initial id values
insert into @maxs (Equip, NextId) values (400, 40);
insert into @maxs (Equip, NextId) values (500, 50);
If I create an item of type '400' then the next Id is 40, and that should be incremented to 41. In a case of a single add, that's easy enough. Our program does adds in batch, so here is my problem.
declare #t as table (Id int, Equip int, Descr varchar(20));
-- simulates the batch processing
insert into @t (Equip, Descr) values (400, 'Item 1');
insert into @t (Equip, Descr) values (400, 'Item 2');
insert into @t (Equip, Descr) values (500, 'Item 3');
-- generate the new id's in batch
UPDATE t
SET Id = (SELECT m.NextId + ROW_NUMBER() OVER (PARTITION BY t.Equip ORDER BY t.Equip))
FROM @t t
INNER JOIN @maxs m ON m.Equip = t.Equip
SELECT * FROM @t
This results in both Item 1 and Item 2 having the same Id, because only one row is returned for 400, so ROW_NUMBER is the same for both. I need to increment the NextId value in @maxs as well as update the entry in @t, so that the second row that joins to the 400 entry in @maxs gets the next value (almost like an x++ reference in C#). Is there a clean way to do that in SQL?
Thanks in advance.
Just go with a JOIN and a nested select:
declare #t as table (Id int, Equip int, Descr varchar(20));
-- simulates the batch processing
insert into @t (Equip, Descr) values (400, 'Item 1');
insert into @t (Equip, Descr) values (400, 'Item 2');
insert into @t (Equip, Descr) values (500, 'Item 3');
-- generate the new id's in batch
UPDATE t
SET
Id = t.Equip + s.RowNum
FROM @t t
JOIN (select Equip,
Descr,
ROW_NUMBER() OVER (PARTITION BY Equip ORDER BY Equip) RowNum
from @t) s
on t.Equip = s.Equip and t.Descr = s.Descr
select * from @t
And if possible, try to switch from a table variable to a temporary table.
You can do what you want with a CTE:
WITH toupdate as (
SELECT t.*,
ROW_NUMBER() OVER (PARTITION BY t.Equip ORDER BY t.Equip) as seqnum
FROM @t t
)
UPDATE t
SET
Id = m.NextId + seqnum
FROM toupdate t INNER JOIN
@maxs m
ON m.Equip = t.Equip;
I'm not sure this is a good idea, though. It is better to use identity columns to identify each row. There are definitely some cases, though, where numbering within a group is useful as a secondary key (for instance, line items on an invoice).
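Note that neither snippet above actually bumps NextId in @maxs. A follow-up along these lines (my addition, not part of either answer) would be needed after assigning the Ids:
UPDATE m
SET NextId = m.NextId + c.cnt
FROM @maxs m
INNER JOIN (SELECT Equip, COUNT(*) AS cnt FROM @t GROUP BY Equip) c ON c.Equip = m.Equip;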

SQL Server: Insert batch with output clause

I'm trying the following:
1. Insert a number of records into table A with a table-valued parameter (tvp). This tvp has extra column(s) that are not in A.
2. Get the inserted ids from A and the corresponding extra columns in the tvp and add them to another table B.
Here's what I tried
Type:
CREATE TYPE tvp AS TABLE
(
id int,
otherid int,
name nvarchar(50),
age int
);
Tables:
CREATE TABLE A (
[id_A] [int] IDENTITY(1,1) NOT NULL,
[name] [varchar](50),
[age] [int]
);
CREATE TABLE B (
[id_B] [int] IDENTITY(1,1) NOT NULL,
[id_A] [int],
[otherid] [int]
);
Insert:
DECLARE #a1 AS tvp;
DECLARE #a2 AS tvp
-- create a tvp (dummy data here - will be passed to as a param to an SP)
INSERT INTO #a1 (name, age, otherid) VALUES ('yy', 10, 99999), ('bb', 20, 88888);
INSERT INTO A (name, age)
OUTPUT
inserted.id_A,
inserted.name,
inserted.age,
a.otherid -- <== isn't accepted here
INTO #a2 (id, name, age, otherid)
SELECT name, age FROM #a1 a;
INSERT INTO B (id_A, otherid) SELECT id, otherid FROM #a2
However, this fails with "The multi-part identifier 'a.otherid' could not be bound.", which I guess is expected, because columns from other tables are not accepted in an INSERT statement (https://msdn.microsoft.com/en-au/library/ms177564.aspx):
from_table_name
Is a column prefix that specifies a table included in the FROM clause of a DELETE, UPDATE, or MERGE statement that is used to specify the rows to update or delete.
So is there any other way to achieve this?
You cannot select values from the source table when using the OUTPUT ... INTO clause of an INSERT.
Use the OUTPUT clause in a MERGE statement for such cases.
DECLARE #a1 AS tvp;
DECLARE #a2 AS tvp
INSERT INTO #a1 (name, age, otherid) VALUES ('yy', 10, 99999), ('bb', 20, 88888);
MERGE A a
USING #a1 a1
ON a1.id = a.[id_A]
WHEN NOT MATCHED THEN
INSERT (name, age)
VALUES (a1.name, a1.age)
OUTPUT inserted.id_A,
a1.otherId,
inserted.name,
inserted.age
INTO #a2;
INSERT INTO B (id_A, otherid) SELECT id, otherid FROM #a2
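To verify the mapping (a quick check, using the dummy data above):
SELECT * FROM B; -- one row per inserted record, each new id_A paired with the otherid carried through #a2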

Insert a table variable into a temp table with multiple columns (ID, Number, etc.)

I need to insert multiple Table variables into one temp table.
One of the table variables is:
DECLARE @TempTable_Number TABLE (Number bigint)
insert into @TempTable_Number (Number) values ('000000000000');
insert into @TempTable_Number (Number) values ('100000000000');
This works for inserting just one table variable
select * into ##GlobalTempTable_1 from @TempTable_Number
I have a couple more table variables like
DECLARE @TempTable_ID TABLE (ID int)
insert into @TempTable_ID (ID) values ('1');
insert into @TempTable_ID (ID) values ('12');
etc...
I tried this to insert data from multiple table variables into one TempTable:
Select * into ##GlobalTempTable_1 From @TempTable_ID, @TempTable_Number;
The query goes into a continuous loop...
EDIT:
One of the table variables is:
DECLARE @gvTempTable TABLE (Number bigint, ID int)
insert into @gvTempTable (Number) values ('21212321332332');
insert into @gvTempTable (Number) values ('100000000000');
insert into @gvTempTable (ID) values ('1');
insert into @gvTempTable (ID) values ('12');
select * into ##GlobalTempTable from @gvTempTable;
select * from ##GlobalTempTable;
This returns a kind of Cartesian product.
Use UNION ALL:
SELECT ID
INTO ##GlobalTempTable_1
FROM @TempTable_ID
UNION ALL
SELECT Number
FROM @TempTable_Number;
LiveDemo
Select * into ##GlobalTempTable_1 From @TempTable_ID, @TempTable_Number;
The query goes into a continuous loop...
It is probably not a loop but a very long query. Keep in mind that you are doing a Cartesian product.
So your query is the same as:
SELECT *
INTO ##GlobalTempTable_1
FROM @TempTable_ID
CROSS JOIN @TempTable_Number;
And the result is NxM records, where N is the number of records in the first table and M in the second.
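For example, with the two ids and two numbers inserted above, the cross join produces 2 x 2 = 4 rows:
SELECT COUNT(*) FROM @TempTable_ID CROSS JOIN @TempTable_Number; -- 4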
Try it like this:
DECLARE #TempTable TABLE (
ID INT
,Number BIGINT
)
INSERT INTO #TempTable (Number)
VALUES ('21212321332332');
INSERT INTO #TempTable (Number)
VALUES ('100000000000');
INSERT INTO #TempTable (ID)
VALUES ('1');
INSERT INTO #TempTable (ID)
VALUES ('12');
--select * into #GlobalTempTable from ##gvTempTable;
--select * from ##GlobalTempTable;
SELECT *
FROM #TempTable
SELECT A.ID
,B.Number
FROM (
SELECT ID
,ROW_NUMBER() OVER (
ORDER BY ID
) TempId
FROM #TempTable
WHERE id IS NOT NULL
) A
INNER JOIN (
SELECT number
,ROW_NUMBER() OVER (
ORDER BY id
) TempId
FROM #TempTable
WHERE number IS NOT NULL
) B ON A.TempId = B.TempId
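Note that this pairs the ID and Number values purely by ROW_NUMBER, so it only lines them up meaningfully when both halves have the same row count and the ORDER BY reflects the intended pairing.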

INSERT multiple rows and OUTPUT original (source) values

I would like to INSERT multiple rows (using INSERT SELECT), and OUTPUT all the new and old IDs into a "mapping" table.
How can I get the original ID (or any source values) in the OUTPUT clause? I don't see a way to get any source values there.
Here is a minimal code example:
-- create some test data
declare #t table (id int identity, name nvarchar(max))
insert @t ([name]) values ('item 1')
insert @t ([name]) values ('another item')
-- duplicate items, storing a mapping from src ID => dest ID
declare @mapping table (srcid int, [newid] int)
insert @t ([name])
output ?????, inserted.id into @mapping -- I want to use source.ID but it's unavailable here.
select [name] from @t as source
-- show results
select * from @t
select * from @mapping
My actual scenario is more complex, so for example I cannot create a temp column on the data table in order to store a "original ID" temporarily, and I cannot uniquely identify items by anything other than the 'ID' column.
Interesting question. For your example, a possible cheat is to depend on the fact that you are doubling the number of rows. Assuming that rows are never deleted and the [id] column remains dense:
-- create some test data
declare #t table (id int identity, name nvarchar(max))
insert @t ([name]) values ('item 1')
insert @t ([name]) values ('another item')
-- duplicate items, storing a mapping from src ID => dest ID
declare @mapping table (srcid int, [newid] int)
declare @Rows as Int = ( select Count(42) from @t )
insert @t ([name])
output inserted.id - @Rows, inserted.id into @mapping
select [name] from @t as source order by source.id -- Note 'order by' clause.
-- show results
select * from @t
select * from @mapping
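If the [id] column cannot be assumed dense, the MERGE trick shown earlier on this page captures the source id directly. A sketch (my adaptation, not part of the original answer) that would replace the insert above:
merge @t as target
using (select id, [name] from @t) as src
on 1 = 0 -- never matches, so every source row is inserted
when not matched then
insert ([name]) values (src.[name])
output src.id, inserted.id into @mapping;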