Insert records into 2 tables in SQL Server using a comma-separated value

I have 2 tables, table A and table B; table B is linked to table A through a foreign key.
Table A has a structure somewhat like this:
PK Id
DeliveryChannelValue
DeliveryChannelId
DateTime
Table B has this structure:
PK Id (unique identifier)
DateTime
FK TableA Id
Now in a stored procedure, I receive the unique identifiers as a comma-separated value, so based on the number of items in that list I have to create the same number of rows in table A and in table B.
If the comma-separated value contains 3 items, then 3 rows should be inserted into table A and 3 rows into table B. I am trying to avoid a cursor.
Please suggest an efficient way to do this.

You can use this CodeProject split function to separate the values, and then use a known DATETIME stamp to keep the tables in sync. This assumes these values aren't constantly updating, which could cause a DATETIME duplication issue; if that's the case, you'll need to add a GUID value in place of the YOURDATE field below:
DECLARE @DATESTAMP DATETIME = GETDATE();

INSERT INTO TABLE_A (ID, YOURDATE)
SELECT item, @DATESTAMP
FROM dbo.[FN_SPLIT](@yourinputstring);

INSERT INTO TABLE_B (YOURDATE, TABLE_A_ID)
SELECT @DATESTAMP, ID
FROM TABLE_A
WHERE YOURDATE = @DATESTAMP;
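On SQL Server 2016 or later, the built-in STRING_SPLIT function can stand in for the CodeProject split function, and an OUTPUT clause can capture the new table A keys directly, avoiding the re-read of TABLE_A. A sketch, assuming TABLE_A's Id is an IDENTITY column and the column names are made up:

```sql
DECLARE @ids TABLE (TableAId INT);
DECLARE @stamp DATETIME = GETDATE();

-- Insert one TABLE_A row per item in the comma-separated string,
-- capturing each generated Id as we go.
INSERT INTO TABLE_A (DeliveryChannelValue, CreatedDate)
OUTPUT inserted.Id INTO @ids (TableAId)
SELECT value, @stamp
FROM STRING_SPLIT(@yourinputstring, ',');

-- One TABLE_B row per captured TABLE_A key.
INSERT INTO TABLE_B (CreatedDate, TableAId)
SELECT @stamp, TableAId
FROM @ids;
```

Because the new keys are collected into the table variable, no timestamp or GUID marker is needed to find the rows that were just inserted.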

Related

How to copy some records of table and change some columns before insert into this table again in sql server?

In my SQL Server database, I have a table whose PK is a GUID, with lots of records already.
Now I want to add records which only need to change the COMMON_ID and COMMON_ASSET_TYPE columns of some existing records.
select * from My_Table where COMMON_ASSET_TYPE = 'ASSET'
I am writing SQL to copy the above query result, changing the COMMON_ID value to a new GUID value and the COMMON_ASSET_TYPE value from 'ASSET' to 'USER', then insert the new result into My_Table.
I do not know how to write it, since it is a hassle to insert the records manually.
Update:
I have far more columns in the table and most of them are not nullable. I want to keep all these columns' data for the new records, except the two columns above. Is there any way to do this without writing all the column names in the SQL?
Try using NEWID() if you want to create a new GUID:
INSERT INTO dbo.YourTable
(
COMMON_ID,
COMMON_ASSET_TYPE
)
select NEWID(), 'USER' as Common_Asset_Type
from My_Table
where COMMON_ASSET_TYPE = 'ASSET'
UPDATE:
As a good practice, I would suggest writing all column names explicitly to have a clean and clear INSERT statement. However, you can use the following construction, though it is not advisable in my opinion:
insert into table_One
select
id
, isnull(name,'Jon')
from table_Two
INSERT INTO My_Table (COMMON_ID,COMMON_LIMIT_IDENTITY, COMMON_CLASS_ID,COMMON_ASSET_TYPE)
SELECT NEWID(), COMMON_LIMIT_IDENTITY, COMMON_CLASS_ID,'USER'
FROM My_Table
WHERE COMMON_ASSET_TYPE = 'ASSET'
If I've understood correctly, you want to take existing records in your table, modify them, and insert them as new records in the same table.
I'll assume the ID column contains the GUID?
I'd first create a temporary table
CREATE TABLE #myTempTable(
ID UNIQUEIDENTIFIER,
Name varchar(max),
... etc
);
Fill this temp table with the records to change, using your SELECT statement.
Change the records in the temp table using an UPDATE statement.
Finally, insert those "new" records back into the primary table with an INSERT INTO ... SELECT statement.
You will probably have to sandwich the INSERT INTO ... SELECT between IDENTITY_INSERT (ON/OFF) statements:
SET IDENTITY_INSERT schema_name.table_name ON
SET IDENTITY_INSERT schema_name.table_name OFF
IDENTITY_INSERT "Allows explicit values to be inserted into the identity column of a table."
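Putting the three steps together, a minimal sketch of this flow (assuming only an ID and a Name column, names made up) might look like:

```sql
-- Sketch of the temp-table flow described above; table and column names are assumptions.
CREATE TABLE #myTempTable (ID UNIQUEIDENTIFIER, Name VARCHAR(MAX));

-- 1. Fill the temp table with the records to change.
INSERT INTO #myTempTable (ID, Name)
SELECT ID, Name FROM My_Table WHERE COMMON_ASSET_TYPE = 'ASSET';

-- 2. Change the records in the temp table.
UPDATE #myTempTable SET ID = NEWID();

-- 3. Insert the modified rows back into the primary table.
INSERT INTO My_Table (ID, Name)
SELECT ID, Name FROM #myTempTable;

DROP TABLE #myTempTable;
```

Since the PK here is a GUID rather than an IDENTITY column, the IDENTITY_INSERT toggle would only be needed if the target table actually has an identity column.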

SQL Insert from one TVP into two tables, using scope identity from first for second table

I have a SQL TVP object with multiple records (for example, 2 records).
I need to insert these records into two almost identical tables; the only difference is that the second table has one more column, which is a foreign key pointing to the first table. So it should loop through the TVP records and insert them one by one into both tables, getting the scope_identity() of the inserted record in the first table and using it for the record in the second table.
1st iteration
insert into first table
get scope_identity() of inserted record
insert into second table (using scope indentity from first table to fill additional column)
And so on, depending on how many records are in TVP.
How can I achieve this?
Obviously I have left out a ton of code since we don't have your column and table names, etc. You want an ID value in your TVP so you can count rows and use it in a WHERE clause and WHILE loop.
Declare @Var1 Int
Declare @YourTVP YourTVPName
Declare @RowCounter Int = 1
While (1 = 1)
Begin
    Insert Into YourTable1 (Column1, ...)
    Select Column1, ...
    From @YourTVP
    Where SomeIDColumn = @RowCounter

    Select @Var1 = SCOPE_IDENTITY()

    Insert Into YourTable2 (Column1, ...)
    Values (@Var1, ...)

    If (Some logic to Break your While loop)
        Break
    Else
        Set @RowCounter = @RowCounter + 1
End
Ok, let me be more clear. I will give a demonstrative example:
I have a TVP (let's name it PersonTVP) containing FirstName and LastName columns, and assume PersonTVP has two records.
I have two tables, Person and PersonExtra. The Person table has Id, FirstName and LastName columns, and PersonExtra has the same columns plus one additional column, PersonId.
I need to insert data from PersonTVP into these two tables. Flow should be:
Take record from PersonTVP and insert into Person table
Get Scope_Identity() of inserted record (the value from Id column)
Insert same record into PersonExtra table and use Scope_Identity() for PersonId column (additional column)
And so on, loop as long as PersonTVP has records.
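The loop can actually be avoided entirely: MERGE with an OUTPUT clause can capture both the newly generated identity and the matching source columns in one set-based statement (a plain INSERT's OUTPUT clause cannot reference source columns, but MERGE's can). A hedged sketch using the Person/PersonExtra names from the example, assuming @PersonTVP is the TVP parameter and Id is an IDENTITY column:

```sql
-- Map table: pairs each new Person.Id with the source row it came from.
DECLARE @map TABLE (PersonId INT, FirstName NVARCHAR(100), LastName NVARCHAR(100));

MERGE INTO Person AS tgt
USING @PersonTVP AS src
ON 1 = 0                            -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
    INSERT (FirstName, LastName)
    VALUES (src.FirstName, src.LastName)
OUTPUT inserted.Id, src.FirstName, src.LastName
INTO @map (PersonId, FirstName, LastName);

-- Second table gets the same columns plus the captured Person.Id.
INSERT INTO PersonExtra (FirstName, LastName, PersonId)
SELECT FirstName, LastName, PersonId
FROM @map;
```

This handles any number of TVP rows in two statements, with no SCOPE_IDENTITY() bookkeeping.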

Moving data between 2 tables on columns with different datatypes

I have 2 tables in 2 different databases. The first table (Custumers) has a lot of data in 10-12 columns.
Then I have the second table (CustumersNew); it has new columns that should represent the same columns as Custumers, just with different names and datatypes. CustumersNew is currently empty. I want to move all of the data from table Custumers to table CustumersNew.
The thing here is that the Custumers UserID column has the datatype uniqueidentifier,
while the CustumersNew ID column has the datatype int. Likewise for the rest of the columns: they simply do not match in datatypes.
How do I move the data from A to B?
EDIT:
I'm using MS-SQL
I would use an INSERT INTO CustumersNew(<column list>) SELECT ... FROM Custumers statement, with each source column CONVERTed to the data type of the corresponding column in CustumersNew.
E.g.
INSERT INTO CustumersNew(UserId, Name, Age)
SELECT UserId, CONVERT(NVARCHAR(128), Name), CONVERT(INT, Age)
FROM Custumers
I am assuming that Name and Age are of different types in these two tables. You would need to write a similar CONVERT expression for each column, where the data type argument matches the data type in the CustumersNew table.
Since UserId/CustomerId is a uniqueidentifier, it cannot be mapped to an integer, and since I doubt the relevance of the values in this column from a functional perspective, I would model the UserId/CustomerId as an AUTO/IDENTITY column in the new table.
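In SQL Server terms, that means defining the new key as an IDENTITY column, so the database generates the integer values itself. A sketch, with the column list assumed:

```sql
CREATE TABLE CustumersNew (
    ID   INT IDENTITY(1,1) PRIMARY KEY,  -- auto-generated integer key
    Name NVARCHAR(128),
    Age  INT
);
```

With the table defined this way, the INSERT ... SELECT simply omits the ID column and each new row receives the next integer automatically.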
Well, you can't store a uniqueidentifier in an int column so you'll have to come up with a new set of keys.
Most database systems provide a mechanism for sequentially numbering records with integer values. In SQL Server, they use IDENTITY columns, in Oracle they use sequences, I think that in MySql you specify the column as auto_increment.
Once you have set up your new table (with its auto-numbering scheme), simply insert the data using SQL:
INSERT INTO CUSTOMERS_NEW (COL2, COL3, COL4)
SELECT COL2, COL3, COL4 FROM CUSTOMERS
Notice that the insert statement does not include the ID column - that should be populated automatically for you.
If you are not able to use INSERT and have to use UPDATE, it will look like this:
UPDATE change
SET widget_id = (SELECT insert_widget.widget_id
FROM insert_widget
WHERE change.id = insert_widget.id)
WHERE change.id = (SELECT insert_widget.id
FROM insert_widget
WHERE change.id = insert_widget.id)
In this example I wanted to move the widget_id column from the insert_widget table to the change table, but the change table already has data, so I had to use an UPDATE statement.

Insert into a row at specific position into SQL server table with PK

I want to insert a row into a SQL server table at a specific position. For example my table has 100 rows and I want to insert a new row at position 9. But the ID column which is PK for the table already has a row with ID 9. How can I insert a row at this position so that all the rows after it shift to next position?
Relational tables have no 'position'. As an optimization, an index will sort rows by the specified key; if you wish to insert a row at a specific rank in the key order, insert it with a key that sorts into that rank position. In your case you'll have to update all rows with an ID greater than 8, incrementing the ID by 1, then insert the row with ID 9:
UPDATE table SET ID += 1 WHERE ID >= 9;
INSERT INTO table (ID, ...) VALUES (9, ...);
Needless to say, there cannot possibly be any sane reason for doing something like that. If you would truly have such a requirement, then you would use a composite key with two (or more) parts. Such a key would allow you to insert subkeys so that it sorts in the desired order. But much more likely your problem can be solved exclusively by specifying a correct ORDER BY, w/o messing with the physical order of the rows.
Another way to look at it is to reconsider what primary key means: the identifier of an entity, which does not change during that entity lifetime. Then your question can be rephrased in a way that makes the fallacy in your question more obvious:
I want to change the content of the entity with ID 9 to some new value. The old values of entity 9 should be moved to the content of the entity with ID 10. The old content of the entity with ID 10 should be moved to the entity with ID 11... and so on and so forth. The old content of the entity with the highest ID should be inserted as a new entity.
Usually you do not want to use primary keys this way. A better approach would be to create another column called 'position' or similar where you can keep track of your own ordering system.
To perform the shifting you could run a query like this:
UPDATE table SET id = id + 1 WHERE id >= 9
This does not work if your column uses IDENTITY (auto-increment) functionality.
No, you can't control where the new row is inserted. Actually, you don't need to: use the ORDER BY clause on your SELECT statements to order the results the way you need.
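For instance, with the separate 'position' column suggested above, the ordering lives in the query rather than in the physical rows (table and column names here are assumptions):

```sql
-- Sketch: present rows in the desired order instead of storing them in it.
SELECT ID, Name
FROM MyTable             -- hypothetical table
ORDER BY SortPosition;   -- hypothetical ordering column
```

Inserting "between" two rows then only means picking a SortPosition value between theirs; no other row has to change.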
DECLARE @duplicateTable4 TABLE (id int, data VARCHAR(20))
INSERT INTO @duplicateTable4 VALUES (1,'not duplicate row')
INSERT INTO @duplicateTable4 VALUES (2,'duplicate row')
INSERT INTO @duplicateTable4 VALUES (3,'duplicate rows')
INSERT INTO @duplicateTable4 VALUES (4,'second duplicate row')
INSERT INTO @duplicateTable4 VALUES (5,'second duplicat rows')

DECLARE @duplicateTable5 TABLE (id int, data VARCHAR(20))
insert into @duplicateTable5 select * from @duplicateTable4
delete from @duplicateTable4

declare @i int, @cnt int
set @i = 1
set @cnt = (select count(*) from @duplicateTable5)
while (@i <= @cnt)
begin
    if @i = 1
    begin
        insert into @duplicateTable4 (id, data) select 11, 'indian'
        insert into @duplicateTable4 (id, data) select id, data from @duplicateTable5 where id = @i
    end
    else
        insert into @duplicateTable4 (id, data) select id, data from @duplicateTable5 where id = @i
    set @i = @i + 1
end
select * from @duplicateTable4
This kind of violates the purpose of a relational table, but if you need it, it's not really that hard to do.
1) Use ROW_NUMBER() OVER (ORDER BY NameOfColumnToSort ASC) AS Row to make a column for the row numbers in your table.
2) From here you can copy (using SELECT columnsYouNeed INTO) the before and after portions of the table into two separate tables (based on which row number you want to insert your values after), using WHERE Row < ## and Row >= ## clauses respectively.
3) Next, drop the original table using DROP TABLE.
4) Then use a UNION of the before table, the row you want to insert (a single explicitly defined SELECT statement without anything else), and the after table. By now you have two UNION statements joining 3 separate SELECT clauses. Here you can just wrap this in a SELECT ... INTO ... FROM clause named after your original table.
5) Last, DROP TABLE the two tables you made.
This is similar to how an ALTER TABLE works.
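The five steps above might be sketched like this (table names, column names, and the insert point are all assumptions; note that SELECT ... INTO does not guarantee physical row order, which is part of why this approach is discouraged):

```sql
-- 1. Number the rows (insert point here: before row 9).
SELECT *, ROW_NUMBER() OVER (ORDER BY SortColumn ASC) AS Row
INTO #numbered
FROM MyTable;

-- 2. Split into before/after portions.
SELECT Col1, Col2 INTO #before FROM #numbered WHERE Row < 9;
SELECT Col1, Col2 INTO #after  FROM #numbered WHERE Row >= 9;

-- 3. Drop the original table.
DROP TABLE MyTable;

-- 4. Rebuild it: before + the new row + after.
SELECT Col1, Col2
INTO MyTable
FROM (
    SELECT Col1, Col2 FROM #before
    UNION ALL
    SELECT 'new', 'row'              -- the single row to insert
    UNION ALL
    SELECT Col1, Col2 FROM #after
) AS rebuilt;

-- 5. Clean up the working tables.
DROP TABLE #numbered; DROP TABLE #before; DROP TABLE #after;
```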
INSERT INTO customers
(customer_id, last_name, first_name)
SELECT employee_number AS customer_id, last_name, first_name
FROM employees
WHERE employee_number < 1003;
For more reference: https://www.techonthenet.com/sql/insert.php

Row number in Sybase tables

Sybase DB tables do not have a concept of self-updating row numbers. However, for one of the modules, I require a row number corresponding to each row in the database, such that max(column) would always tell me the number of rows in the table.
I thought I'd introduce an int column and keep updating it to keep track of the row number. However, I'm having problems updating this column in the case of deletes. What SQL should I use in the delete trigger to update this column?
You can easily assign a unique number to each row by using an identity column. The identity can be a numeric or an integer (in ASE12+).
This will almost do what you require. There are certain circumstances in which you will get a gap in the identity sequence. (These are called "identity gaps", the best discussion on them is here). Also deletes will cause gaps in the sequence as you've identified.
Why do you need to use max(col) to get the number of rows in the table, when you could just use count(*)? If you're trying to get the last row from the table, then you can do
select * from table where column = (select max(column) from table).
Regarding the delete trigger to update a manually managed column, I think this would be a potential source of deadlocks and many performance issues. Imagine you have 1 million rows in your table and you delete row 1; that's 999,999 rows you now have to update to subtract 1 from the id.
Delete trigger
CREATE TRIGGER tigger ON myTable FOR DELETE
AS
update myTable
set id = id - (select count(*) from deleted d where d.id < t.id)
from myTable t
To avoid locking problems
You could add an extra table (which joins to your primary table) like this:
CREATE TABLE rowCounter
(id int, -- foreign key to main table
rownum int)
... and use the rownum field from this table.
If you put the delete trigger on this table then you would hugely reduce the potential for locking problems.
Approximate solution?
Does the table need to keep its rownumbers up to date all the time?
If not, you could have a job which runs every minute or so, which checks for gaps in the rownum, and does an update.
Question: do the rownumbers have to reflect the order in which rows were inserted?
If not, you could do far fewer updates, but only updating the most recent rows, "moving" them into gaps.
Leave a comment if you would like me to post any SQL for these ideas.
I'm not sure why you would want to do this. You could experiment with using temporary tables and SELECT INTO with an identity column, as below.
create table test
(
col1 int,
col2 varchar(3)
)
insert into test values (100, "abc")
insert into test values (111, "def")
insert into test values (222, "ghi")
insert into test values (300, "jkl")
insert into test values (400, "mno")
select rank = identity(10), col1 into #t1 from Test
select * from #t1
delete from test where col2="ghi"
select rank = identity(10), col1 into #t2 from Test
select * from #t2
drop table test
drop table #t1
drop table #t2
This would give you a dynamic id (of sorts)