Insert Multiple Rows into Table from a Table - SQL

I have a SQL Server 2008 database. The database has a stored procedure which receives two strings as parameters. One parameter is used to build a temp table which will usually only have 1 or 2 rows but theoretically could have more.
For each row in the temp table, I need to insert a row into a different table that consists of the other parameter and the contents of the temp table. Is there a way to do this without a cursor?
I've tried variations on the following:
Pseudo code:
procedure InsertLinks(@Key varchar(36), @LinkKey varchar(36))
    tempLinks Table = getLinks(@LinkKey)
    Insert into MyTable (Key, LinksTo) Values (@Key, Select LinksTo From tempLinks)

The VALUES clause is the problem - you have a single scalar value, a comma, and then a whole table. That's not valid.
The following should work just fine:
INSERT INTO MyTable (Key, LinksTo)
SELECT @Key, LinksTo
FROM tempLinks
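Put together, the whole procedure might look something like this. It is only a sketch: the way #tempLinks gets filled from @LinkKey is assumed here (a dbo.Links lookup stands in for whatever getLinks does in the pseudo code), and the Key column is bracketed because KEY is a reserved word:
CREATE PROCEDURE InsertLinks
    @Key     varchar(36),
    @LinkKey varchar(36)
AS
BEGIN
    -- Build the temp table of links; this SELECT stands in for
    -- whatever getLinks(@LinkKey) does in the pseudo code above.
    SELECT LinksTo
    INTO   #tempLinks
    FROM   dbo.Links              -- assumed source of the links
    WHERE  LinkKey = @LinkKey;

    -- One row goes into MyTable per row in the temp table, no cursor
    -- needed: the constant @Key is simply repeated on every row.
    INSERT INTO MyTable ([Key], LinksTo)
    SELECT @Key, LinksTo
    FROM   #tempLinks;
END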

Related

How to copy some records of a table and change some columns before inserting them into the same table again in SQL Server?

In my SQL Server database, I have a table whose PK is a GUID, with lots of records in it already.
Now I want to add records which only need to change the COMMON_ID and COMMON_ASSET_TYPE columns of some existing records.
select * from My_Table where COMMON_ASSET_TYPE = 'ASSET'
I am writing SQL to copy the above query result, changing the COMMON_ID value to a new GUID and the COMMON_ASSET_TYPE value from 'ASSET' to 'USER', and then insert the new rows into My_Table.
I do not know how to write it, and inserting the records manually is getting tedious.
Update:
I have far more columns in the table and most of them are not nullable. I want to keep all of these columns' data for the new records, except for the two columns above. Is there a way to do this without writing out all the column names in the SQL?
Try using NEWID() if you want to create a new GUID:
INSERT INTO dbo.YourTable
(
    COMMON_ID,
    COMMON_ASSET_TYPE
)
SELECT NEWID(), 'USER' AS COMMON_ASSET_TYPE
FROM My_Table
WHERE COMMON_ASSET_TYPE = 'ASSET'
UPDATE:
As good practice, I would suggest writing all the column names explicitly so the insert statement is clean and clear. You can, however, use the following construction, though I would not advise it:
insert into table_One
select
id
, isnull(name,'Jon')
from table_Two
Applied to the table in the question, that gives:
INSERT INTO My_Table (COMMON_ID, COMMON_LIMIT_IDENTITY, COMMON_CLASS_ID, COMMON_ASSET_TYPE)
SELECT NEWID(), COMMON_LIMIT_IDENTITY, COMMON_CLASS_ID, 'USER'
FROM My_Table
WHERE COMMON_ASSET_TYPE = 'ASSET'
If I've understood correctly you want to take existing records in your table, modify them, and insert them as new records in the same table.
I'll assume the ID column contains the GUID?
I'd first create a temporary table
CREATE TABLE #myTempTable(
    ID UNIQUEIDENTIFIER,
    Name varchar(max)
    -- ... etc.
);
Fill this temp table with the records to change, using your SELECT statement.
Change the records in the temp table with an UPDATE statement.
Finally, insert those "new" records back into the primary table with an INSERT INTO ... SELECT statement (a sketch follows below).
You will probably have to sandwich the INSERT INTO ... SELECT with IDENTITY_INSERT (on/off):
SET IDENTITY_INSERT schema_name.table_name ON
SET IDENTITY_INSERT schema_name.table_name OFF
IDENTITY_INSERT "Allows explicit values to be inserted into the identity column of a table."

SQL Increment the ID Column in Merge Statement

I am using MERGE in SQL Server, and when not matched I am inserting the values into the destination table. The destination table has a unique ID column with values ID1, ID2, ID3, ... etc.
Whenever I insert using MERGE, I call a scalar-valued function which increments the last value in the table by 1 and returns it. When I call that function in the INSERT clause, the entries inserted by the MERGE all get the same ID. How can I overcome this?
MERGE INTO Test
USING
(
    SELECT Name, UserName
    FROM Test1
) AS Src
ON 1 = 0
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, UserName)
    VALUES ((SELECT GETID()), Name, UserName);
When I run this code, I do get 10 entries in the table, but all of them get the same ID.
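One common workaround (a sketch only, not taken from the original thread) is to read the current maximum once and add ROW_NUMBER() per source row, so each inserted row gets its own value; the scalar function typically evaluates against the table as it was before the statement started, which is why every row gets the same ID. The table and column names come from the question; the 'ID' + numeric-suffix parsing is an assumption:
DECLARE @LastNum int;

-- Read the current highest numeric suffix once, before the MERGE.
SELECT @LastNum = ISNULL(MAX(CAST(SUBSTRING(Id, 3, 20) AS int)), 0)
FROM Test;

MERGE INTO Test AS Tgt
USING
(
    SELECT
        Name,
        UserName,
        ROW_NUMBER() OVER (ORDER BY Name) AS rn   -- 1, 2, 3, ... per source row
    FROM Test1
) AS Src
ON 1 = 0
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, UserName)
    VALUES ('ID' + CAST(@LastNum + Src.rn AS varchar(20)), Src.Name, Src.UserName);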

Oracle set based insert vs set based merge performance

We're using Oracle 11g at the moment without Enterprise (not an option unfortunately).
Let's say I have a table with a constant number of rows (let's say 2000). Let's call it data_source.
I want to insert some columns of this table into another table, data_dest. I'm using all the records from the source table.
In other words, I would like to insert this set
select data_source.col1, data_source.col2, ... data_source.colN
from data_source
Which would be faster in this case:
insert into data_dest
select data_source.col1, data_source.col2, ... data_source.colN
from data_source
OR
merge into data_dest dd
using data_source ds
on (dd.col1 = ds.col1) -- let's assume col1 is the matching column
when not matched
insert (col1,col2...)
values(ds.col1,ds.col2...)
EDIT 1:
We can assume there are no primary key violations from the insert.
In other words we can assume that insert will successfully insert all of the rows and so will merge.
The insert is very likely faster because it does not require a join on the two tables.
That said, the two queries are not equivalent. Assuming that col1 is defined as the primary key, the insert will throw an error if data_source contains a value in col1 that is already in data_dest. Because the merge compares the data in the two tables and only inserts the rows that don't already exist, it will never throw a primary key violation.
An insert that would be equivalent to the merge would be:
INSERT INTO data_dest
SELECT data_source.col1, data_source.col2, ... data_source.colN
FROM data_source
WHERE NOT EXISTS
(SELECT *
FROM data_dest
WHERE data_source.col1 = data_dest.col1)
It's likely that the plan for this insert will be very similar (if not identical) to the plan for the merge and the performance would be indistinguishable.

Multiple row insert into two tables avoiding loops

I have a set of values which have to be inserted into two tables. The input has, say, 5 rows, and I have to insert these 5 rows into table A first. Table A has an identity column. Next I have to insert these 5 rows into table B with an extra column, which is the identity from table A.
How can this be done without using any loops?
Any help would be much appreciated.
INSERT INTO TABLE_A (COL2, COL3)
SELECT COL2, COL3 FROM #TEMP_TAB

SET @identityval = @@identity

INSERT INTO TABLE_B (COLA, COLB, COLC)
SELECT @identityval, COL2, COL3 FROM #TEMP_TAB
You cannot insert into multiple tables using a single statement.
What you could do is create an insert trigger on table A, so that after the insert occurs the trigger takes the identity values of the rows that were inserted into table A and inserts them into table B (a sketch of such a trigger is below).
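A minimal sketch of that trigger, assuming TABLE_A's identity column is called ID and TABLE_B stores it in COLA (both names are assumptions; the inserted pseudo-table carries the identity values for every row of the batch, so this works for multi-row inserts too):
CREATE TRIGGER trg_TableA_Insert
ON TABLE_A
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- "inserted" holds every row that was just inserted into TABLE_A,
    -- including the identity values generated for them.
    INSERT INTO TABLE_B (COLA, COLB, COLC)
    SELECT i.ID, i.COL2, i.COL3
    FROM inserted AS i;
END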
Here is one solution (a rough sketch follows below):
1. Take the max of the identity column from TABLE_A.
2. Insert the new records into TABLE_A.
3. Then insert the records into TABLE_B from TABLE_A, selecting the rows whose identity is greater than the previous max.
Thanks,
Gopal
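A rough sketch of those steps (ID, COL2 and COL3 are assumed column names, and the temp table is taken from the question's attempt; note this is only safe if no other session inserts into TABLE_A at the same time):
-- 1. Remember the current max identity value
DECLARE @lastId int;
SELECT @lastId = ISNULL(MAX(ID), 0) FROM TABLE_A;

-- 2. Insert the new records into TABLE_A
INSERT INTO TABLE_A (COL2, COL3)
SELECT COL2, COL3 FROM #TEMP_TAB;

-- 3. Copy the newly generated ids (anything above the old max) into TABLE_B
INSERT INTO TABLE_B (COLA, COLB, COLC)
SELECT a.ID, a.COL2, a.COL3
FROM TABLE_A AS a
WHERE a.ID > @lastId;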
What you want to do is not possible.
You can only get the value from the last insert, using the @@identity variable. This way it's possible to add to multiple tables, setting the right foreign key without selecting the just-inserted row again with a cursor, but this approach is not useful when inserting multiple rows at once.
From the documentation:
Use the @@identity global variable to retrieve the last value inserted into an IDENTITY column. The value of @@identity changes each time an insert or select into attempts to insert a row into a table.
Here is a procedure which inserts a single row and you can use the return value to create a reference to the inserted data in another table:
create procedure reset_id as
    set identity_insert sales_daily on
    insert into sales_daily (syb_identity, stor_id)
    values (102, '1349')
    select @@identity

execute reset_id

large insert in two tables. First table will feed second table with its generated Id

A question about how to program the following in T-SQL:
Table 1
I insert 400,000 mobile phone numbers into a table with two columns: the number to insert and an identity id.
Table 2
The second table is called SendList. It is a list with 3 columns: an identity id, a list id, and a phone number id.
Table 3
Is called ListInfo and contains the PK list id and info about the list.
My question is how I should do this using T-SQL:
Insert the large list of phone numbers into table 1, then insert the ids generated by that insert into table 2, and do it in an optimized way. It can't take a long time; that is my problem.
Greatly appreciated if someone could guide me on this one.
Thanks
Sebastian
What version of SQL Server are you using? If you are using 2008, you can use the OUTPUT clause to insert multiple records and output all of the identity values to a table variable. Then you can use this to insert into the child tables.
DECLARE @MyTableVar table(MyID int);

INSERT MyTable (field1, field2)
OUTPUT INSERTED.MyID
INTO @MyTableVar
SELECT Field1, Field2 FROM MyOtherTable WHERE field3 = 'test'

-- Use the ids captured in the table variable for the child insert.
INSERT MyChildTable (myID, field1, field2)
SELECT MyID, 'test', getdate() FROM @MyTableVar
I've not tried this directly with a bulk insert, but you could always bulk insert to a staging table and then use the process described above. Inserting groups of records is much, much faster than one at a time.