Comma-separated value insertion in SQL Server 2005

How can I insert values from a comma-separated input parameter with a stored procedure?
For example:
exec StoredProcedureName 17,'127,204,110,198',7,'162,170,163,170'
you can see that I have two comma-separated value lists in the parameter list. Both will have the same number of values: if the first has 5 comma-separated values, then the second one also has 5 comma-separated values.
127 and 162 are related
204 and 170 are related
...and same for the others.
How can I insert these two values?
One comma-separated value is inserted, but how do I insert two?

Have a look at something like this (full example):
DECLARE @Inserts TABLE(
        ID INT,
        Val1 INT,
        Val2 INT,
        Val3 INT
)

DECLARE @Param1 INT,
        @Param2 VARCHAR(100),
        @Param3 INT,
        @Param4 VARCHAR(100)

SELECT  @Param1 = 17,
        @Param2 = '127,204,110,198',
        @Param3 = 7,
        @Param4 = '162,170,163,170'

DECLARE @Table1 TABLE(
        ID INT IDENTITY(1,1),
        Val INT
)

DECLARE @Table2 TABLE(
        ID INT IDENTITY(1,1),
        Val INT
)

DECLARE @textXML XML

SELECT  @textXML = CAST('<d>' + REPLACE(@Param2, ',', '</d><d>') + '</d>' AS XML)

INSERT INTO @Table1
SELECT  T.split.value('.', 'nvarchar(max)') AS data
FROM    @textXML.nodes('/d') T(split)

SELECT  @textXML = CAST('<d>' + REPLACE(@Param4, ',', '</d><d>') + '</d>' AS XML)

INSERT INTO @Table2
SELECT  T.split.value('.', 'nvarchar(max)') AS data
FROM    @textXML.nodes('/d') T(split)

INSERT INTO @Inserts
SELECT  @Param1,
        t1.Val,
        @Param3,
        t2.Val
FROM    @Table1 t1 INNER JOIN
        @Table2 t2 ON t1.ID = t2.ID

SELECT  *
FROM    @Inserts
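With the sample parameter values above, the final SELECT should return something like this (the two lists are paired row by row on the IDENTITY IDs):

ID          Val1        Val2        Val3
----------- ----------- ----------- -----------
17          127         7           162
17          204         7           170
17          110         7           163
17          198         7           170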

You need a way to split and process the string in T-SQL, and there are many ways to do this. This article covers the pros and cons of just about every method:
"Arrays and Lists in SQL Server 2005 and Beyond, When Table Value Parameters Do Not Cut It" by Erland Sommarskog
You need to create a split function. This is how a split function can be used:
SELECT *
FROM   YourTable y
       INNER JOIN dbo.yourSplitFunction(@Parameter) s ON y.ID = s.Value
I prefer the Numbers table approach for splitting a string in T-SQL, but there are numerous ways to split strings in SQL Server; see the link above, which explains the pros and cons of each.
For the Numbers table method to work, you need to do this one-time table setup, which will create a table Numbers containing rows with values 1 through 10,000:
SELECT TOP 10000 IDENTITY(int,1,1) AS Number
INTO Numbers
FROM sys.objects s1
CROSS JOIN sys.objects s2
ALTER TABLE Numbers ADD CONSTRAINT PK_Numbers PRIMARY KEY CLUSTERED (Number)
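As a quick sanity check (just a sketch, not part of the required setup), you can confirm the table was populated as expected:

-- Numbers should contain exactly 10,000 sequential values
SELECT COUNT(*)    AS NumberOfRows,
       MIN(Number) AS MinNumber,
       MAX(Number) AS MaxNumber
FROM Numbers;
-- Expected: 10000, 1, 10000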
Once the Numbers table is set up, create this split function:
CREATE FUNCTION [dbo].[FN_ListToTableRows]
(
     @SplitOn char(1)       --REQUIRED, the character to split the @List string on
    ,@List    varchar(8000) --REQUIRED, the list to split apart
)
RETURNS TABLE
AS
RETURN
(
    ----------------
    --SINGLE QUERY-- --this will return empty rows and row numbers
    ----------------
    SELECT
        ROW_NUMBER() OVER(ORDER BY number) AS RowNumber
        ,LTRIM(RTRIM(SUBSTRING(ListValue, number + 1, CHARINDEX(@SplitOn, ListValue, number + 1) - number - 1))) AS ListValue
    FROM (
             SELECT @SplitOn + @List + @SplitOn AS ListValue
         ) AS InnerQuery
         INNER JOIN Numbers n ON n.Number < LEN(InnerQuery.ListValue)
    WHERE SUBSTRING(ListValue, number, 1) = @SplitOn
);
GO
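As a quick test (a sketch only), you can call the function directly on one of the sample lists; the expected rows are shown as comments:

SELECT RowNumber, ListValue
FROM dbo.FN_ListToTableRows(',', '127,204,110,198');
-- RowNumber  ListValue
-- 1          127
-- 2          204
-- 3          110
-- 4          198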
You can now easily split a CSV string into a table and join on it. To accomplish your task, set up a test table to insert into:
create table YourTable (col1 int, col2 int)
then create your procedure:
CREATE PROCEDURE StoredProcedureName
(
     @Params1 int
    ,@Array1  varchar(8000)
    ,@Params2 int
    ,@Array2  varchar(8000)
)
AS

INSERT INTO YourTable
        (col1, col2)
SELECT  a1.ListValue, a2.ListValue
FROM    dbo.FN_ListToTableRows(',', @Array1) a1
        INNER JOIN dbo.FN_ListToTableRows(',', @Array2) a2 ON a1.RowNumber = a2.RowNumber
GO
test it out:
exec StoredProcedureName 17,'127,204,110,198',7,'162,170,163,170'
select * from YourTable
OUTPUT:
(4 row(s) affected)
col1 col2
----------- -----------
127 162
204 170
110 163
198 170
(4 row(s) affected)

This may not be an answer to your question, but I thought I'd let you know that there is a better way to pass related values (in table form) to a stored procedure: XML. You can build the XML string in your app (just as a regular string) and pass it to the stored procedure as a parameter. You can then use the following syntax to get it into a table. Hope this helps; this way you can pass an entire table as a parameter to a stored procedure.
--Parameters
@param1  int,
@Budgets xml,
@Param3  int

-- @Budgets = '<Values><Row><Val1>127</Val1><Val2>162</Val2></Row><Row><Val1>204</Val1><Val2>170</Val2></Row></Values>'

SELECT @param1 as Param1,
       x.query('Val1').value('.', 'int') as Val1,
       @Param3 as Param3,
       x.query('Val2').value('.', 'int') as Val2
INTO   #NewTable
FROM   @Budgets.nodes('/Values/Row') x1(x)
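A sketch of how the caller side might look (the procedure name here is hypothetical); the XML is built in the application as a plain string and passed as the parameter:

DECLARE @xml xml = N'<Values>
    <Row><Val1>127</Val1><Val2>162</Val2></Row>
    <Row><Val1>204</Val1><Val2>170</Val2></Row>
</Values>';

EXEC StoredProcedureName @param1 = 17, @Budgets = @xml, @Param3 = 7;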

Related

Updating a JSON array in a SQL Server table

I have an array of JSON in a SQL Server column, and I am trying to update all names to 'Joe'.
I tried the code below, but it only updates the first element of the JSON array.
CREATE TABLE #t (I INT, JsonColumn NVARCHAR(MAX) CHECK (ISJSON(JsonColumn) > 0))
INSERT INTO #t
VALUES (1, '[{"id":"101","name":"John"}, {"id":"102","name":"peter"}]')
INSERT INTO #t VALUES (2,'[{"id":"103","name":"dave"}, {"id":"104","name":"mark"}]')
SELECT * FROM #t
SELECT * FROM #t
CROSS APPLY OPENJSON(JsonColumn) s
WITH cte AS
(
SELECT *
FROM #t
CROSS APPLY OPENJSON(JsonColumn) s
)
UPDATE cte
SET JsonColumn = JSON_MODIFY(JsonColumn, '$[' + cte.[key] + '].name', 'Joe')
SELECT * FROM #t
-- DROP TABLE #t
It only updates the first element of the array to Joe.
Current result:
[{"id":"101","name":"Joe"}, {"id":"102","name":"cd"}]
[{"id":"103","name":"Joe"}, {"id":"104","name":"mark"}]
Expected
[{"id":"101","name":"Joe"}, {"id":"102","name":"Joe"}]
[{"id":"103","name":"Joe"}, {"id":"104","name":"Joe"}]
Since you want to do this in one transaction, I could not think of any way other than to create another table, store the exploded values in it, and use FOR XML PATH to rebuild the value. The problem is that you are trying to update a JSON array, and the same row cannot be updated twice with different values in a single UPDATE. With CROSS APPLY, as you have shown, each array element becomes its own row, and only then can each one be updated to 'Joe'.
Your query updates name = 'Joe' for id = 101 on the first derived row and name = 'Joe' for id = 102 on the second, based on the value column. Since these are two different rows, you see only one change applied back in your temp table.
I created one more temp table (#t2) to store those exploded values and used FOR XML PATH to concatenate them back. The final query against #t2 gives your expected results.
SELECT *
into #t2
FROM #t
CROSS APPLY OPENJSON(JsonColumn) s
select *, json_value (value, '$.name') from #t2
UPDATE #t2
SET value = JSON_MODIFY(value, '$.name', 'Joe')
select t.I ,
JSONValue = concat('[',stuff((select ',' + value from #t2 t1
where t1.i = t.i
for XML path('')),1,1,''),']')
from #t2 t
group by t.I
Output:
I JSONValue
1 [{"id":"101","name":"Joe"},{"id":"102","name":"Joe"}]
Updating original table:
update t
set t.JsonColumn =t2.JSONValue
from #t t
join (select t.I ,
JSONValue = concat('[',stuff((select ',' + value from #t2 t1
where t1.i = t.i
for XML path('')),1,1,''),']')
from #t2 t
group by t.I ) t2 on t.I = t2.i
I think it is impossible to apply multiple updates to one record with a single command, so you need to explode the JSON array into records.
You can do this with a temporary or variable table and a cursor.
-- Declare the Variable Table
DECLARE @JsonTable TABLE (
    RecordKey  UNIQUEIDENTIFIER,
    ArrayIndex INT,
    ObjKey     NVARCHAR(100),
    ObjValue   NVARCHAR(1000)
);

-- Fill the Variable Table
INSERT INTO @JsonTable
SELECT TB1.pk AS RecordKey,
       TB1data.[key] AS ArrayIndex,
       TB1dataItem.[key] AS ObjKey,
       TB1dataItem.[value] AS ObjValue
FROM MyTable TB1
CROSS APPLY OPENJSON(JSON_QUERY(TB1.data, '$.list')) TB1data
CROSS APPLY OPENJSON(JSON_QUERY(TB1data.value, '$')) TB1dataItem
WHERE TB1dataItem.[key] = 'name'

-- Declare Cursor and relative variables
DECLARE @recordKey UNIQUEIDENTIFIER,
        @recordData NVARCHAR(MAX),
        @arrayIndex INT,
        @objKey NVARCHAR(100),
        @objValue NVARCHAR(1000);

DECLARE JsonCursor CURSOR FAST_FORWARD READ_ONLY FOR
    SELECT * FROM @JsonTable;

-- Use Cursor to read any json array item
OPEN JsonCursor;

FETCH NEXT
FROM JsonCursor
INTO @recordKey, @arrayIndex, @objKey, @objValue;

WHILE @@FETCH_STATUS = 0 BEGIN

    UPDATE TB1
    SET data = JSON_MODIFY(
        data,
        '$.list[' + CAST(@arrayIndex AS VARCHAR(20)) + '].name',
        'Joe'
    )
    FROM MyTable TB1
    WHERE TB1.pk = @recordKey;

    FETCH NEXT
    FROM JsonCursor
    INTO @recordKey, @arrayIndex, @objKey, @objValue;

END;

CLOSE JsonCursor;
DEALLOCATE JsonCursor;
Do you need this?
CREATE TABLE #t (
I INT,
JsonColumn NVARCHAR(MAX) CHECK (ISJSON(JsonColumn) > 0)
);
INSERT INTO #t
VALUES (1, '[{"id":"101","name":"John"}, {"id":"102","name":"peter"}]');
INSERT INTO #t
VALUES (2, '[{"id":"103","name":"dave"}, {"id":"104","name":"mark"}]');
SELECT CONCAT('[', STRING_AGG(JSON_MODIFY(JSON_MODIFY('{}', '$.id', j.id), '$.name', 'Joe'), ','), ']')
FROM #t t
CROSS APPLY OPENJSON(JsonColumn) WITH (id INT, name sysname) j
GROUP BY t.I
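Note that STRING_AGG requires SQL Server 2017 or later. The query above only selects the rebuilt arrays; a sketch of how you might write them back into #t (assuming the same sample table) is:

;WITH rebuilt AS
(
    SELECT t.I,
           -- note: id comes back as a number here because of the INT mapping in the OPENJSON ... WITH clause
           NewJson = CONCAT('[', STRING_AGG(JSON_MODIFY(JSON_MODIFY('{}', '$.id', j.id), '$.name', 'Joe'), ','), ']')
    FROM #t t
    CROSS APPLY OPENJSON(JsonColumn) WITH (id INT, name sysname) j
    GROUP BY t.I
)
UPDATE t
SET t.JsonColumn = r.NewJson
FROM #t t
JOIN rebuilt r ON r.I = t.I;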

SQL: checking whether a comma-separated column contains any of a variable's comma-separated values

I have a variable @a = '1,2,3,4' and a table containing a column B that holds comma-separated values.
How can I check whether column B contains any of the values in @a?
You need to implement a function for splitting the values. There are a lot of variations; you can use this one:
CREATE FUNCTION [dbo].[fn_Analysis_ConvertCsvListToNVarCharTableWithOrder](@List nvarchar(max), @Delimiter nvarchar(10) = ',')
RETURNS @result TABLE
(
    [Value] nvarchar(max),
    [SortOrder] bigint NOT NULL
)
AS
BEGIN
    IF @Delimiter IS NULL
    BEGIN
        SET @Delimiter = ','
    END

    DECLARE @XML xml = N'<r><![CDATA[' + REPLACE(@List, @Delimiter, ']]></r><r><![CDATA[') + ']]></r>'
    DECLARE @BufTable TABLE (Value nvarchar(max), SortOrder bigint NOT NULL IDENTITY(1, 1) PRIMARY KEY)

    INSERT INTO @BufTable (Value)
    SELECT Tbl.Col.value('.', 'nvarchar(max)')
    FROM @XML.nodes('//r') Tbl(Col)
    OPTION (OPTIMIZE FOR (@XML = NULL))

    INSERT INTO @result (Value, SortOrder)
    SELECT Value, SortOrder
    FROM @BufTable

    RETURN
END
Having such a function, it's pretty easy:
DECLARE @DataSource TABLE
(
    [column] VARCHAR(1024)
);

DECLARE @column VARCHAR(1024) = '1,2,3,4';

INSERT INTO @DataSource ([column])
VALUES ('100,200,300')
      ,('100,1,500')
      ,('1,2,3,500')
      ,('200')
      ,('33,32,31,4,30');

SELECT DISTINCT [column]
FROM @DataSource
CROSS APPLY [dbo].[fn_Analysis_ConvertCsvListToNVarCharTableWithOrder] ([column], ',') DSV
INNER JOIN [dbo].[fn_Analysis_ConvertCsvListToNVarCharTableWithOrder] (@column, ',') FV
    ON DSV.[Value] = FV.[Value];
Using CROSS APPLY we split the values in each column. Then we split the filter values and perform an INNER JOIN in order to match only the rows having a value contained in the filter. After that, we need a DISTINCT because a column value may contain many values from the filter.
A t-sql string "splitter" is what you need but I would NOT use the mTVF recommended above as it is extremely inefficient and will kill parallelism. An inline table valued function (iTVF) is what you want for splitting strings.
I would suggest using delimitedSplit8k or delimitedSplit8k_lead which will perform ~30-90 times faster; or STRING_SPLIT if you're on SQL 2016+ and only need the value which will be several hundred times faster. Note this performance test:
-- sample data
declare @rows int = 10000;
if object_id('tempdb..#strings') is not null drop table #strings;
select top (@rows)
    someid = identity(int,1,1),
    somestring = replace(right(left(cast(newid() as varchar(36)), 27),21),'-',',')
into #strings
from sys.all_columns a, sys.all_columns b;

-- Performance test
set nocount on;
print 'fn_Analysis_ConvertCsvListToNVarCharTableWithOrder'+char(10)+replicate('-',50);
go
declare @st datetime = getdate(), @item varchar(10);
select @item = [value]
from #strings t
cross apply dbo.fn_Analysis_ConvertCsvListToNVarCharTableWithOrder(t.somestring,',');
print datediff(ms,@st,getdate());
go 5
print 'delimitedSplit8K (serial)'+char(10)+replicate('-',50);
go
declare @st datetime = getdate(), @item varchar(10);
select @item = item
from #strings t
cross apply dbo.DelimitedSplit8K(t.somestring,',')
option (maxdop 1);
print datediff(ms,@st,getdate());
go 5
print 'delimitedSplit8K (parallel)'+char(10)+replicate('-',50);
go
declare @st datetime = getdate(), @item varchar(10);
select @item = item
from #strings t
cross apply dbo.DelimitedSplit8K(t.somestring,',')
option (recompile, querytraceon 8649);
print datediff(ms,@st,getdate());
go 5
Results
fn_Analysis_ConvertCsvListToNVarCharTableWithOrder
--------------------------------------------------
Beginning execution loop
4183
4274
4536
4294
4406
Batch execution completed 5 times.
delimitedSplit8K (serial)
--------------------------------------------------
Beginning execution loop
50
50
50
54
53
Batch execution completed 5 times.
delimitedSplit8K (parallel)
--------------------------------------------------
Beginning execution loop
133
134
133
140
136
Batch execution completed 5 times.
How you could use it to solve your problem:
declare @sometable table(someid int identity, someNbr tinyint);
insert @sometable values (1),(3),(6),(12),(7),(15),(19);

declare @searchstring varchar(1000) = '1,2,3,4,19';

select someid, someNbr
from @sometable t
cross apply dbo.DelimitedSplit8K(@searchstring,',') s
where t.someNbr = s.Item;
Results
someid someNbr
----------- -------
1 1
2 3
7 19
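If you are on SQL Server 2016+ and only need the values (no ordinal position), a sketch of the same filter using the built-in STRING_SPLIT would be:

declare @sometable table(someid int identity, someNbr tinyint);
insert @sometable values (1),(3),(6),(12),(7),(15),(19);

declare @searchstring varchar(1000) = '1,2,3,4,19';

select someid, someNbr
from @sometable t
cross apply string_split(@searchstring, ',') s
where t.someNbr = s.[value];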

SQL: Split comma separated string list with a query?

Here is my table structure:
id PaymentCond
1 ZBE1, AP1, LST2, CC1
2 VB3, CC1, ZBE1
I need to split the column PaymentCond, and I would love to do that with a simple SQL query, since I have no clue how to use functions and want to keep it all simple.
Here is what I already found:
SELECT id,
Substring(PaymentConditions, 1, Charindex(',', PaymentConditions)-1) as COND_1,
Substring(PaymentConditions, Charindex(',', PaymentConditions)+1, LEN(ANGEBOT.STDTXT)) as COND_2
from Payment
WHERE id = '1'
But this only outputs
id COND_1 COND_2
1 ZBE1 AP1, LST2, CC1
Is there a way to split everything from PaymentConditions to COND_1, COND_2, COND_3 and so on?
Thanks in advance.
First, create a function to split the values:
create function [dbo].[udf_splitstring] (@tokens varchar(max),
                                         @delimiter varchar(5))
returns @split table (
        token varchar(200) not null )
as
begin
    declare @list xml

    select @list = cast('<a>'
                        + replace(@tokens, @delimiter, '</a><a>')
                        + '</a>' as xml)

    insert into @split
                (token)
    select ltrim(t.value('.', 'varchar(200)')) as data
    from   @list.nodes('/a') as x(t)

    return
end
CREATE TABLE #Table1
([id] int, [PaymentCond] varchar(20))
;
INSERT INTO #Table1
([id], [PaymentCond])
VALUES
(1, 'ZBE1, AP1, LST2, CC1'),
(2, 'VB3, CC1, ZBE1')
;
select id, token FROM #Table1 as t1
CROSS APPLY [dbo].UDF_SPLITSTRING([PaymentCond],',') as t2
Output:
id token
1 ZBE1
1 AP1
1 LST2
1 CC1
2 VB3
2 CC1
2 ZBE1
declare @SchoolYearList nvarchar(max) = '2014,2015,2016'
declare @start int = 1
declare @length int = 4

create table #TempFY(SchoolYear int)

while @start < len(@SchoolYearList)
BEGIN
    Insert into #TempFY
    select SUBSTRING(@SchoolYearList, @start, @length)

    set @start = @start + 5
END

Select SchoolYear from #TempFY
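Run against the list above, the final SELECT should return:

SchoolYear
-----------
2014
2015
2016

Note this approach assumes every value in the list is exactly four characters wide, which is why @length is 4 and @start advances by 5.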
There is a new table-valued function in SQL Server, STRING_SPLIT:
DECLARE @tags NVARCHAR(400) = 'aaaa,bbb,,cc,d'

SELECT *
FROM STRING_SPLIT(@tags, ',')
You will get:
value
-----
aaaa
bbb

cc
d
(the empty element between 'bbb' and 'cc' comes back as an empty row)
But be careful about its availability in your database: the STRING_SPLIT function is available only under compatibility level 130 or higher.
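A quick way to check (and, if you are allowed to, raise) the compatibility level, as a sketch:

SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();

-- Raise it if needed (and if you are permitted to change it):
-- ALTER DATABASE YourDatabaseName SET COMPATIBILITY_LEVEL = 130;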

What is the best way to join two tables which have comma-separated columns?

Table1
ID Name Tags
----------------------------------
1 Customer1 Tag1,Tag5,Tag4
2 Customer2 Tag2,Tag6,Tag4,Tag11
3 Customer5 Tag6,Tag5,Tag10
and Table2
ID Name Tags
----------------------------------
1 Product1 Tag1,Tag10,Tag6
2 Product2 Tag2,Tag1,Tag5
3 Product5 Tag1,Tag2,Tag3
What is the best way to join Table1 and Table2 on the Tags column?
For each comma-separated tag in Table1's Tags column, it should look for that tag in the comma-separated Tags column of Table2.
Note: the tables are not full-text indexed.
The best way is not to have comma-separated values in a column at all. Just use normalized data and you won't have trouble with querying like this: each column is supposed to hold only one value.
Without that, there's no way to use any indexes, really. Even a full-text index behaves quite differently from what you might think, and full-text indexes are inherently clunky to use here: they're designed for searching text, not meaningful data. In the end, you will not get much better than something like
where (Col like 'txt,%' or Col like '%,txt' or Col like '%,txt,%' or Col = 'txt')
Using an xml column might be another alternative, though it's still quite a bit silly. It would at least allow you to treat the values as a collection.
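For illustration, a minimal sketch of what the normalized layout could look like (table and column names here are made up); each customer or product gets one row per tag, and the join becomes a plain, index-friendly equality join:

CREATE TABLE CustomerTag
(
    CustomerID int         NOT NULL,
    Tag        varchar(50) NOT NULL,
    CONSTRAINT PK_CustomerTag PRIMARY KEY (CustomerID, Tag)
);

CREATE TABLE ProductTag
(
    ProductID int         NOT NULL,
    Tag       varchar(50) NOT NULL,
    CONSTRAINT PK_ProductTag PRIMARY KEY (ProductID, Tag)
);

-- "Which customers share a tag with which products" is then just:
SELECT c.CustomerID, p.ProductID
FROM CustomerTag c
INNER JOIN ProductTag p ON p.Tag = c.Tag;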
I don't think there will ever be an easy and efficient solution to this. As Luaan pointed out, it is a very bad idea to store data like this: you lose most of the power of SQL when you squeeze what should be individual units of data into a single cell.
But you can manage this at the slight cost of creating two user-defined functions. First, use this brilliant recursive technique to split the strings into individual rows based on your delimiter:
CREATE FUNCTION dbo.TestSplit (@sep char(1), @s varchar(512))
RETURNS table
AS
RETURN (
    WITH Pieces(pn, start, stop) AS (
        SELECT 1, 1, CHARINDEX(@sep, @s)
        UNION ALL
        SELECT pn + 1, stop + 1, CHARINDEX(@sep, @s, stop + 1)
        FROM Pieces
        WHERE stop > 0
    )
    SELECT pn AS SplitIndex,
           SUBSTRING(@s, start, CASE WHEN stop > 0 THEN stop - start ELSE 512 END) AS SplitPart
    FROM Pieces
)
Then, make a function that takes two strings and counts the matches:
CREATE FUNCTION dbo.MatchTags (@a varchar(512), @b varchar(512))
RETURNS INT
AS
BEGIN
    RETURN
        (SELECT COUNT(*)
         FROM dbo.TestSplit(',', @a) a
         INNER JOIN dbo.TestSplit(',', @b) b
             ON a.SplitPart = b.SplitPart)
END
And that's it. Here is a test run with table variables:
DECLARE @A TABLE (Name VARCHAR(20), Tags VARCHAR(100))
DECLARE @B TABLE (Name VARCHAR(20), Tags VARCHAR(100))

INSERT INTO @A ( Name, Tags )
VALUES
    ( 'Customer1','Tag1,Tag5,Tag4'),
    ( 'Customer2','Tag2,Tag6,Tag4,Tag11'),
    ( 'Customer5','Tag6,Tag5,Tag10')

INSERT INTO @B ( Name, Tags )
VALUES
    ( 'Product1','Tag1,Tag10,Tag6'),
    ( 'Product2','Tag2,Tag1,Tag5'),
    ( 'Product5','Tag1,Tag2,Tag3')

SELECT * FROM @A a
INNER JOIN @B b ON dbo.MatchTags(a.Tags, b.Tags) > 0
I developed a solution as follows:
CREATE TABLE [dbo].[Table1](
    Id   int           not null,
    Name nvarchar(250) not null,
    Tag  nvarchar(250) null
) ON [PRIMARY]
GO

CREATE TABLE [dbo].[Table2](
    Id   int           not null,
    Name nvarchar(250) not null,
    Tag  nvarchar(250) null
) ON [PRIMARY]
GO
Get sample data for Table1; it will insert 28,000 records:
INSERT INTO Table1
SELECT CustomerID,CompanyName, (FirstName + ',' + LastName)
FROM AdventureWorks.SalesLT.Customer
GO 3
Sample data for Table2 (I need the same tags in Table2):
declare @tag1 nvarchar(50) = 'Donna,Carreras'
declare @tag2 nvarchar(50) = 'Johnny,Caprio'
Get sample data for Table2; it will insert 9,735 records:
INSERT INTO Table2
SELECT ProductID, Name, (case when (right(ProductID,1) >= 5) then @tag1 else @tag2 end)
FROM AdventureWorks.SalesLT.Product
GO 3
My Solution
create TABLE #dt (
Id int IDENTITY(1,1) PRIMARY KEY,
Tag nvarchar(250) NOT NULL
);
I've created a temp table and will fill it with the distinct tags from Table1:
insert into #dt(Tag)
SELECT distinct Tag
FROM Table1
Now I need a vertical table for the tags:
create TABLE #Tags ( Tag nvarchar(250) NOT NULL );
Now I fill the #Tags table with a WHILE loop; you can use a cursor, but WHILE is faster:
declare @Rows int = 1
declare @Tag nvarchar(1024)
declare @Id int = 0

WHILE @Rows > 0
BEGIN
    Select Top 1 @Tag = Tag, @Id = Id from #dt where Id > @Id
    set @Rows = @@ROWCOUNT
    if @Rows > 0
    begin
        insert into #Tags(Tag) SELECT Data FROM dbo.StringToTable(@Tag, ',')
    end
END
Last step: join Table2 with #Tags
select distinct t.*
from Table2 t
inner join #Tags on (',' + t.Tag + ',') like ('%,' + #Tags.Tag + ',%')
Table1 rowcount = 28,000, Table2 rowcount = 9,735; the select runs in less than 2 seconds.
I use this kind of solution with paths of trees. First put a comma at the very beginning and at the very end of the string. Then you can write
Where col1 like '%,' || col2 || ',%'
Some databases also index the column for LIKE (Postgres does it partially), so this is reasonably efficient. I don't know SQL Server.
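For reference, the same pattern in SQL Server syntax would be a sketch along these lines (the table and column names are placeholders); + is the string concatenation operator instead of ||, and padding the CSV column with commas on both sides avoids false matches on partial tag names:

-- CsvColumn holds the comma-separated list, @SingleTag is one value to look for
SELECT *
FROM SomeTable
WHERE ',' + CsvColumn + ',' LIKE '%,' + @SingleTag + ',%';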

Call procedure for each row without using cursors and loops?

I need to apply a procedure to every record's NVARCHAR(MAX) field in a table. The procedure will receive a large string and split it into several shorter strings (fewer than 100 chars). The procedure will return a result set of smaller strings. These strings will be inserted into a different table (each in its own row).
How can I apply this procedure in a set-based fashion to the whole table, so that I can insert the results into another table?
I've found some similar questions on SO; however, they didn't need to use the INSERT INTO construct. This means UDFs and TVFs are off the table. EDIT: functions do not support DML statements, and I wanted to use INSERT INTO inside the function.
Alternatively, is there a set-based way of using a stored procedure? SELECT sproc(Text) FROM Table didn't work.
I am not sure of your exact logic to split the string, but if possible you can make your split function an inline TVF (here's one I made earlier):
CREATE FUNCTION dbo.Split(@StringToSplit NVARCHAR(MAX), @Delimiter NCHAR(1))
RETURNS TABLE
AS
RETURN
(
    SELECT Position = Number,
           Value = SUBSTRING(@StringToSplit, Number, CHARINDEX(@Delimiter, @StringToSplit + @Delimiter, Number) - Number)
    FROM ( SELECT TOP (LEN(@StringToSplit) + 1) Number = ROW_NUMBER() OVER(ORDER BY a.object_id)
           FROM sys.all_objects a
         ) n
    WHERE SUBSTRING(@Delimiter + @StringToSplit + @Delimiter, n.Number, 1) = @Delimiter
);
Then you can simply use this in your insert statement by using cross apply with the TVF:
DECLARE @T1 TABLE (ID INT IDENTITY, TextToSplit NVARCHAR(MAX) NOT NULL);
DECLARE @T2 TABLE (T1ID INT NOT NULL, Position INT NOT NULL, SplitText NVARCHAR(MAX) NOT NULL);

INSERT @T1 (TextToSplit)
VALUES ('This is a test'), ('This is Another Test');

INSERT @T2 (T1ID, Position, SplitText)
SELECT t1.ID, s.Position, s.Value
FROM @T1 t1
     CROSS APPLY dbo.Split(t1.TextToSplit, N' ') s;

SELECT *
FROM @T2;
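With the two sample rows above, the final SELECT against @T2 should return something like this (Position is the character offset at which each token starts, as produced by the Split function):

T1ID        Position    SplitText
----------- ----------- ---------
1           1           This
1           6           is
1           9           a
1           11          test
2           1           This
2           6           is
2           9           Another
2           17          Test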