In SQL Server, I have a stored procedure that takes a JSON parameter @ChangeSet as below.
DECLARE @ChangeSet varchar(MAX) =
'{
"Acts":
[
{"ActId":100,"ActText":"Intro","ActNumber":1},
{"ActId":0, "ActText":"Beginning","ActNumber":2},
{"ActId":0, "ActText":"Middle","ActNumber":3},
{"ActId":0, "ActText":"End","ActNumber":4}
]
}';
Within the proc, I have a MERGE statement that inserts or updates rows depending on the ActId (an INSERT if ActId is 0, otherwise an UPDATE). I would like to update the JSON @ChangeSet variable with the multiple primary-key ActIds returned through the INSERTED pseudo-table of the MERGE, so that I can return it in an OUT parameter (a sketch of such a MERGE follows the expected results below).
ActId  Type  Action  Value      ActNumber
------------------------------------------
100    Act   UPDATE  Intro      1
101    Act   INSERT  Beginning  2
102    Act   INSERT  Middle     3
103    Act   INSERT  End        4
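For reference, the MERGE inside the proc would look roughly like the sketch below; the dbo.Act table, its IDENTITY ActId column, and the column list are assumptions inferred from the JSON shape:

MERGE dbo.Act AS tgt
USING (SELECT ActId, ActText, ActNumber
       FROM OPENJSON(@ChangeSet, '$.Acts')
       WITH (ActId int, ActText varchar(50), ActNumber int)) AS src
ON tgt.ActId = src.ActId AND src.ActId <> 0
WHEN MATCHED THEN
    UPDATE SET ActText = src.ActText, ActNumber = src.ActNumber
WHEN NOT MATCHED THEN
    -- ActId is assumed to be IDENTITY, so new keys surface via OUTPUT
    INSERT (ActText, ActNumber) VALUES (src.ActText, src.ActNumber)
OUTPUT INSERTED.ActId, $action, INSERTED.ActText, INSERTED.ActNumber;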
I could re-query the database, outputting as JSON, but I am interested in figuring out a technique for directly updating the JSON using something like JSON_MODIFY, etc., if possible.
I looked at various samples but have not found anything similar. Anybody have any good examples?
If I understand the question correctly, you have two options:
Modify the Acts JSON array using JSON_MODIFY (but you need SQL Server 2017+ to use a variable as a path expression). This approach assigns a local variable in a SELECT statement, so you must not use ORDER BY or DISTINCT in that statement.
Parse the input JSON, use a set-based approach to get the expected results as a table and output the table content as JSON using FOR JSON AUTO
JSON:
DECLARE @ChangeSet varchar(MAX) =
'{
"Acts":
[
{"ActId":100,"ActText":"Intro","ActNumber":1},
{"ActId":0, "ActText":"Beginning","ActNumber":2},
{"ActId":0, "ActText":"Middle","ActNumber":3},
{"ActId":0, "ActText":"End","ActNumber":4}
]
}';
Statement with JSON_MODIFY:
SELECT @ChangeSet = JSON_MODIFY(
@ChangeSet,
CONCAT('$.Acts[', j1.[key], '].ActId'),
v.[Id]
)
FROM OPENJSON(@ChangeSet, '$.Acts') j1
CROSS APPLY OPENJSON(j1.[value]) WITH (ActNumber int '$.ActNumber') j2
JOIN (VALUES
(100, 'Act', 'UPDATE', 'Intro', 1),
(101, 'Act', 'INSERT', 'Beginning', 2),
(102, 'Act', 'INSERT', 'Middle', 3),
(103, 'Act', 'INSERT', 'End', 4)
) v ([Id], [Type], [Action], [Value], [ActNumber]) ON v.[ActNumber] = j2.[ActNumber]
Statement with FOR JSON:
SELECT @ChangeSet = (
SELECT v.[Id] AS ActId, j.ActText, j.ActNumber
FROM OPENJSON(@ChangeSet, '$.Acts') WITH (
ActId int '$.ActId',
ActText varchar(50) '$.ActText',
ActNumber int '$.ActNumber'
) j
JOIN (VALUES
(100, 'Act', 'UPDATE', 'Intro', 1),
(101, 'Act', 'INSERT', 'Beginning', 2),
(102, 'Act', 'INSERT', 'Middle', 3),
(103, 'Act', 'INSERT', 'End', 4)
) v ([Id], [Type], [Action], [Value], [ActNumber]) ON v.[ActNumber] = j.[ActNumber]
FOR JSON AUTO, ROOT ('Acts')
)
Result:
{
"Acts":
[
{"ActId":100, "ActText":"Intro", "ActNumber":1},
{"ActId":101, "ActText":"Beginning", "ActNumber":2},
{"ActId":102, "ActText":"Middle", "ActNumber":3},
{"ActId":103, "ActText":"End", "ActNumber":4}
]
}
For completeness with the question, here is the finished routine. It takes the OUTPUT of a MERGE statement, inserts it into a table variable, and then updates the input JSON with the newly inserted ActId primary keys so that it can be returned through a procedure OUT parameter.
-- SQL 2017+ REQUIRED
DECLARE @ActActions table( [ActId] int, [Action] varchar(30),
[Value] nvarchar(max), [ActNumber] int );
---------------------------------------------------------------------------------
OUTPUT COALESCE (INSERTED.ActId, DELETED.ActId), $action,
COALESCE (INSERTED.ActText, DELETED.ActText),
COALESCE (INSERTED.ActNumber, DELETED.ActNumber)
INTO @ActActions; -- required semicolon at the end of the MERGE
---------------------------------------------------------------------------------
SELECT @ChangeSetJson = JSON_QUERY(JSON_MODIFY(
@ChangeSetJson, '$.Acts[' + j1.[key] + '].ActId', a.[ActId] ) )
FROM OPENJSON(@ChangeSetJson, '$.Acts') j1 CROSS APPLY
OPENJSON(j1.[value])
WITH (ActNumber int '$.ActNumber') j2 INNER JOIN
@ActActions a
ON a.[ActNumber] = j2.[ActNumber]
WHERE a.[Action] = 'INSERT'
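For context, here is a minimal shell of the surrounding procedure, showing how the updated JSON travels back to the caller; the procedure name is hypothetical and the body is elided:

CREATE PROCEDURE dbo.usp_ApplyChangeSet
    @ChangeSetJson nvarchar(max) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    -- ... the MERGE with OUTPUT INTO @ActActions and the
    --     JSON_MODIFY update shown above go here ...
END
GO

-- Caller: the variable passed with OUTPUT receives the updated JSON
DECLARE @json nvarchar(max) = N'{"Acts":[{"ActId":0,"ActText":"New","ActNumber":1}]}';
EXEC dbo.usp_ApplyChangeSet @ChangeSetJson = @json OUTPUT;
SELECT @json;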
I am trying to find a way to let the DBMS perform the population of merge fields within a long text.
Create the structure:
CREATE TABLE [dbo].[store]
(
[id] [int] NOT NULL,
[text] [nvarchar](MAX) NOT NULL
)
CREATE TABLE [dbo].[statement]
(
[id] [int] NOT NULL,
[store_id] [int] NOT NULL
)
CREATE TABLE [dbo].[statement_merges]
(
[statement_id] [int] NOT NULL,
[merge_field] [nvarchar](30) NOT NULL,
[user_data] [nvarchar](MAX) NOT NULL
)
Now, create test values
INSERT INTO [store] (id, text)
VALUES (1, 'Waw, stackoverflow is an amazing library of lost people in the IT hell, and i have the feeling that $$PERC_SAT$$ of the users found a solution, personally I asked $$ASKED$$ questions.')
INSERT INTO [statement] (id, store_id)
VALUES (1, 1)
INSERT INTO [statement_merges] (statement_id, merge_field, user_data)
VALUES (1, '$$PERC_SAT$$', '85%')
INSERT INTO [statement_merges] (statement_id, merge_field, user_data)
VALUES (1, '$$ASKED$$', '12')
At the moment, my app delivers the final statement by looping through the merges, replacing each merge field in the stored text, and outputting:
Waw, stackoverflow is an amazing library of lost people in the IT
hell, and i have the feeling that 85% of the users found a solution,
personally I asked 12 questions.
I am trying to find a way to be code-independent and serve the output in a single query: as you understood, to select a statement in which the stored text has been populated with the user data. I hope I'm clear.
I looked at the TRANSLATE function, but it only does character-level replacement, so I have two choices:
I could try a recursive function, replacing merge fields one by one until none is left in the computed text, but I have doubts about the performance of this approach;
or there is some magic to do this, but I need your knowledge...
Consider that I want this because the real texts are very long, and I don't want to store them more than once in my database. You can imagine a 3-page contract with only 12 parameters, like start date, invoiced amount, etc. Everything else can't be changed, for compliance.
Thank you for your time!
EDIT:
Thanks to Randy's help, this seems to do the trick:
WITH cte_replace_tokens AS (
SELECT replace(r.text, m.merge_field, m.user_data) as [final], m.merge_field, s.id, 1 AS i
FROM store r
INNER JOIN statement s ON s.store_id = r.id
INNER JOIN statement_merges m ON m.statement_id = s.id
WHERE m.statement_id = 1
UNION ALL
SELECT replace(r.final, m.merge_field, m.user_data) as [final], m.merge_field, r.id, r.i + 1 AS i
FROM cte_replace_tokens r
INNER JOIN statement_merges m ON m.statement_id = r.id
WHERE m.merge_field > r.merge_field
)
select TOP 1 final from cte_replace_tokens ORDER BY i DESC
I will check with a bigger database whether the performance is good...
At least I can "populate" one statement; I still need to figure out how to extract a list as well.
Thanks again!
If a record is updated more than once by the same update, the last wins. None of the updates are affected by the others - no cumulative effect. It is possible to trick SQL using a local variable to get cumulative effects in some cases, but it's tricky and not recommended. (Order becomes important and is not reliable in an update.)
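A tiny repro of that non-cumulative behavior, as a sketch with throwaway names:

DECLARE @demo table (txt varchar(10));
INSERT INTO @demo VALUES ('12');

-- Both token rows match the single target row, but only one
-- REPLACE is applied: updates in one statement do not stack.
UPDATE d
SET txt = REPLACE(d.txt, t.token, t.value)
FROM @demo d
CROSS JOIN (VALUES ('1', 'A'), ('2', 'B')) t(token, value);

SELECT txt FROM @demo; -- 'A2' or '1B', never 'AB'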
One alternative is recursion in a CTE. Generate a new record from the prior one as each token is replaced, until there are no tokens left. Here is a working example that replaces 1 with A, 2 with B, etc. (I wonder if there is some tricky XML that can do this as well.)
if not object_id('tempdb..#Raw') is null drop table #Raw
CREATE TABLE #Raw(
[test] [varchar](100) NOT NULL PRIMARY KEY CLUSTERED
)
if not object_id('tempdb..#Token') is null drop table #Token
CREATE TABLE #Token(
[id] [int] NOT NULL PRIMARY KEY CLUSTERED,
[token] [char](1) NOT NULL,
[value] [char](1) NOT NULL
)
insert into #Raw values('123456'), ('1122334456')
insert into #Token values(1, '1', 'A'), (2, '2', 'B'), (3, '3', 'C'), (4, '4', 'D'), (5, '5', 'E'), (6, '6', 'F');
WITH cte_replace_tokens AS (
SELECT r.test, replace(r.test, l.token, l.value) as [final], l.id
FROM #Raw r
CROSS JOIN #Token l
WHERE l.id = 1
UNION ALL
SELECT r.test, replace(r.final, l.token, l.value) as [final], l.id
FROM cte_replace_tokens r
CROSS JOIN #Token l
WHERE l.id = r.id + 1
)
select * from cte_replace_tokens where id = 6
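With the seed rows above, the final select should return:

test        final
----------  ----------
123456      ABCDEF
1122334456  AABBCCDDEF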
It's not recommended to do such tasks inside the SQL engine, but if you want to do it there, you need to do it in a loop using a cursor in a function or stored procedure, like so:
DECLARE @merge_field nvarchar(30)
, @user_data nvarchar(MAX)
, @statementid INT = 1
, @text varchar(MAX) = 'Waw, stackoverflow is an amazing library of lost people in the IT hell, and i have the feeling that $$PERC_SAT$$ of the users found a solution, personally I asked $$ASKED$$ questions.'
DECLARE merge_statements CURSOR FAST_FORWARD
FOR SELECT
sm.merge_field
, sm.user_data
FROM dbo.statement_merges AS sm
WHERE sm.statement_id = @statementid
OPEN merge_statements
FETCH NEXT FROM merge_statements
INTO @merge_field , @user_data
WHILE @@FETCH_STATUS = 0
BEGIN
set @text = REPLACE(@text , @merge_field, @user_data )
FETCH NEXT FROM merge_statements
INTO @merge_field , @user_data
END
CLOSE merge_statements
DEALLOCATE merge_statements
SELECT @text
Here is a recursive solution.
SQL Fiddle
MS SQL Server 2017 Schema Setup:
CREATE TABLE [dbo].[store]
(
[id] [int] NOT NULL,
[text] [nvarchar](MAX) NOT NULL
)
CREATE TABLE [dbo].[statement]
(
[id] [int] NOT NULL,
[store_id] [int] NOT NULL
)
CREATE TABLE [dbo].[statement_merges]
(
[statement_id] [int] NOT NULL,
[merge_field] [nvarchar](30) NOT NULL,
[user_data] [nvarchar](MAX) NOT NULL
)
INSERT INTO store (id, text)
VALUES (1, '$$(*)$$, stackoverflow...$$PERC_SAT$$...$$ASKED$$ questions.')
INSERT INTO store (id, text)
VALUES (2, 'Use The #_#')
INSERT INTO statement (id, store_id) VALUES (1, 1)
INSERT INTO statement (id, store_id) VALUES (2, 2)
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (1, '$$PERC_SAT$$', '85%')
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (1, '$$ASKED$$', '12')
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (1, '$$(*)$$', 'Wow')
INSERT INTO statement_merges (statement_id, merge_field, user_data) VALUES (2, ' #_#', 'Flux!')
Query 1:
;WITH Normalized AS
(
SELECT
store_id=store.id,
store.text,
sm.merge_field,
sm.user_data,
RowNumber = ROW_NUMBER() OVER(PARTITION BY store.id,sm.statement_id ORDER BY merge_field),
statement_id = st.id
FROM
store store
INNER JOIN statement st ON st.store_id = store.id
INNER JOIN statement_merges sm ON sm.statement_id = st.id
)
, Recurse AS
(
SELECT
store_id, statement_id, old_text = text, merge_field,user_data, RowNumber,
Iteration=1,
new_text = REPLACE(text, merge_field, user_data)
FROM
Normalized
WHERE
RowNumber=1
UNION ALL
SELECT
n.store_id, n.statement_id, r.old_text, n.merge_field, n.user_data,
RowNumber=r.RowNumber+1,
Iteration=Iteration+1,
new_text = REPLACE(r.new_text, n.merge_field, n.user_data)
FROM
Normalized n
INNER JOIN Recurse r ON r.RowNumber = n.RowNumber AND r.statement_id = n.statement_id
)
,ReverseOnIteration AS
(
SELECT *,
ReverseIteration = ROW_NUMBER() OVER(PARTITION BY statement_id ORDER BY Iteration DESC)
FROM
Recurse
)
SELECT
store_id, statement_id, new_text, old_text
FROM
ReverseOnIteration
WHERE
ReverseIteration=1
Results:
| store_id | statement_id | new_text | old_text |
|----------|--------------|------------------------------------------|--------------------------------------------------------------|
| 1 | 1 | Wow, stackoverflow...85%...12 questions. | $$(*)$$, stackoverflow...$$PERC_SAT$$...$$ASKED$$ questions. |
| 2 | 2 | Use TheFlux! | Use The #_# |
With the help of Randy, I think I've achieved what I wanted to do!
Given that my real case is a contract, in which there are several statements that may be:
free text
stored text without any merges
stored text with one or several merges
this CTE does the job!
WITH cte_replace_tokens AS (
-- The initial query doesn't join on merges nor on store, because the statement can be free text
SELECT COALESCE(r.text, s.part_text) AS [final], CAST('' AS NVARCHAR) AS merge_field, s.id, 1 AS i, s.contract_id
FROM statement s
LEFT JOIN store r ON s.store_id = r.id
UNION ALL
-- We loop until the last merge field; the output contains the iteration so we can keep the last record (all fields updated)
SELECT replace(r.final, m.merge_field, m.user_data) as [final], m.merge_field, r.id, r.i + 1 AS i, r.contract_id
FROM cte_replace_tokens r
INNER JOIN statement_merges m ON m.statement_id = r.id
WHERE m.merge_field > r.merge_field AND r.final LIKE '%' + m.merge_field + '%'
-- avoid lost replacements by forcing only one merge_field per loop
AND NOT EXISTS( SELECT mm.statement_id FROM statement_merges mm WHERE mm.statement_id = m.statement_id AND mm.merge_field > r.merge_field AND mm.merge_field < m.merge_field)
)
select s.id,
(select top 1 final from cte_replace_tokens t WHERE t.contract_id = s.contract_id AND t.id = s.id ORDER BY i DESC) as res
FROM statement s
where contract_id = 1
If the CTE solution with a cross join is too slow, an alternative would be to dynamically build a scalar function that contains every REPLACE required by the token table. One scalar function call per record is then O(N). I get the same result as before.
The function is simple and not likely to get too long, depending upon how big the token table becomes (there is a 256 MB batch limit). I've seen attempts to dynamically create queries to improve performance backfire by moving the problem to compile time, but that should not be an issue here.
if not object_id('tempdb..#Raw') is null drop table #Raw
CREATE TABLE #Raw(
[test] [varchar](100) NOT NULL PRIMARY KEY CLUSTERED
)
if not object_id('tempdb..#Token') is null drop table #Token
CREATE TABLE #Token(
[id] [int] NOT NULL PRIMARY KEY CLUSTERED,
[token] [char](1) NOT NULL,
[value] [char](1) NOT NULL
)
insert into #Raw values('123456'), ('1122334456')
insert into #Token values(1, '1', 'A'), (2, '2', 'B'), (3, '3', 'C'), (4, '4', 'D'), (5, '5', 'E'), (6, '6', 'F');
DECLARE @sql varchar(max) = 'CREATE FUNCTION dbo.fn_ReplaceTokens(@raw varchar(8000)) RETURNS varchar(8000) AS BEGIN RETURN ';
WITH cte_replace_statement AS (
SELECT a.id, CAST('replace(@raw,''' + a.token + ''',''' + a.value + ''')' as varchar(max)) as [statement]
FROM #Token a
WHERE a.id = 1
UNION ALL
SELECT n.id, CAST(replace(l.[statement], '@raw', 'replace(@raw,''' + n.token + ''',''' + n.value + ''')') as varchar(max)) as [statement]
FROM #Token n
INNER JOIN cte_replace_statement l
ON n.id = l.id + 1
)
select @sql += [statement] + ' END' from cte_replace_statement where id = 6
print @sql
if not object_id('dbo.fn_ReplaceTokens') is null drop function dbo.fn_ReplaceTokens
execute (@sql)
SELECT r.test, dbo.fn_ReplaceTokens(r.test) as [final] FROM #Raw r
I have this result set in SQL server:
ID  CUSTOMER  PRODUCT  DATE      COUNT
A1  Walmart   Widget   1/1/2020  5
B2  Amazon    Thingy   1/2/2020  10
C3  Target    Gadget   2/1/2020  7
I want to output it as JSON, which SQL Server 2016+ has plenty of ability to do. But I want a traditional string-indexed list ('dictionary') indexed by the id, like so:
Goal
{
"A1": {"Customer":"Walmart", "Product":"Widget", "Date":"1/1/2020", "Count":5 },
"B2": {"Customer":"Amazon", "Product":"Thingy", "Date":"1/2/2020", "Count":10},
"C3": {"Customer":"Target", "Product":"Gadget", "Date":"2/1/2020", "Count":7 }
}
However, a typical select * from table for json path outputs an unindexed array of objects:
Current State
[
{"Id":"A1", "Customer":"Walmart", "Product":"Widget", "Date":"1/1/2020", "Count":5 },
{"Id":"B2", "Customer":"Amazon", "Product":"Thingy", "Date":"1/2/2020", "Count":10},
{"Id":"C3", "Customer":"Target", "Product":"Gadget", "Date":"2/1/2020", "Count":7 }
]
The other for json modifiers, such as root, seem superficially relevant, but as far as I can tell they just do glorified string concatenation, capturing the entire array in an outer root node.
How can the above notation be done using native (performant) SQL server json functions?
I don't think that you can generate JSON output with variable key names using FOR JSON AUTO or FOR JSON PATH, but if you can upgrade to SQL Server 2017, the following approach, which uses only the built-in JSON support, is a possible option:
Table:
CREATE TABLE Data (
Id varchar(2),
Customer varchar(50),
Product varchar(50),
[Date] date,
[Count] int
)
INSERT INTO Data
(Id, Customer, Product, [Date], [Count])
VALUES
('A1', 'Walmart', 'Widget', '20200101', 5),
('B2', 'Amazon', 'Thingy', '20200102', 10),
('C3', 'Target', 'Gadget', '20200201', 7)
Statement:
DECLARE @json nvarchar(max) = N'{}'
SELECT @json = JSON_MODIFY(
@json,
CONCAT(N'$."', ID, N'"'),
JSON_QUERY((SELECT Customer, Product, [Date], [Count] FOR JSON PATH, WITHOUT_ARRAY_WRAPPER))
)
FROM Data
SELECT @json
Result:
{"A1":{"Customer":"Walmart","Product":"Widget","Date":"2020-01-01","Count":5},"B2":{"Customer":"Amazon","Product":"Thingy","Date":"2020-01-02","Count":10},"C3":{"Customer":"Target","Product":"Gadget","Date":"2020-02-01","Count":7}}
Notes:
Using a variable or expression instead of a literal for the path parameter of JSON_MODIFY() is available in SQL Server 2017+. JSON_QUERY() is used to prevent the escaping of special characters.
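A quick illustration of the JSON_QUERY() point (a minimal sketch; the values are made up):

DECLARE @j nvarchar(max) = N'{}';

-- Without JSON_QUERY() the FOR JSON result is treated as a plain
-- string and gets escaped: {"a":"{\"x\":1}"}
SELECT JSON_MODIFY(@j, '$.a',
    (SELECT 1 AS x FOR JSON PATH, WITHOUT_ARRAY_WRAPPER));

-- With JSON_QUERY() it is injected as a nested object: {"a":{"x":1}}
SELECT JSON_MODIFY(@j, '$.a',
    JSON_QUERY((SELECT 1 AS x FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)));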
The question is tagged SQL Server 2016, where STRING_AGG() won't work (you would need to aggregate with XML/XPath or a custom aggregate there); the query below therefore needs 2017+.
declare @t table
(
Id varchar(10),
CUSTOMER varchar(50),
PRODUCT varchar(50),
[DATE] date,
[COUNT] int
);
insert into @t(Id, CUSTOMER, PRODUCT, [DATE], [COUNT])
values
('A1','Walmart','Widget','20200101', 5),
('B2','Amazon','Thingy','20200201', 10),
('C3','Target','Gadget','20200102', 7);
select concat('{', STRING_AGG(thejson, ','), '}')
from
(
select concat('"', STRING_ESCAPE(Id, 'json'), '":', (select CUSTOMER, PRODUCT, DATE, COUNT for json path, without_array_wrapper )) as thejson
from #t
) as src;
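For a strictly 2016-compatible variant, the same per-row fragments can be concatenated with the classic FOR XML PATH trick instead of STRING_AGG(); this is a sketch against the same @t:

select concat('{', stuff((
    select concat(',"', STRING_ESCAPE(Id, 'json'), '":',
           (select CUSTOMER, PRODUCT, [DATE], [COUNT]
            for json path, without_array_wrapper))
    from @t
    for xml path(''), type).value('.', 'nvarchar(max)'), 1, 1, ''), '}');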
Unfortunately, you want a JSON result that has multiple values -- A1, B2, and C3 -- derived from the data. This means that you need to aggregate the data into one row. Normally, for json path would want to create an array of values, one for each row.
So, this should do what you want:
select json_query(max(case when id = 'A1' then j.p end)) as A1,
json_query(max(case when id = 'B2' then j.p end)) as B2,
json_query(max(case when id = 'C3' then j.p end)) as C3
from t cross apply
(select t.customer, t.product, t.date, t.count
for json path
) j(p)
for json path;
Here is a db<>fiddle.
However, it is not easily generalizable. For a general solution, you might need to do string manipulations.
I am trying to optimize a paging query, together with a total count of records, in a stored procedure. Please suggest an optimized paging query to fetch 25 records per page from millions of records.
DDL Commands
create table pdf_details
(
prodid nvarchar(100),
prodname nvarchar(100),
lang nvarchar(100),
fmt nvarchar(5),
type varchar(5),
constraint pk_pdf Primary Key (prodid, lang, fmt)
)
create table html_details
(
prodid nvarchar(100),
prodname nvarchar(100),
lang nvarchar(100),
fmt nvarchar(5),
type varchar(5),
constraint pk_html Primary Key(prodid, lang, fmt)
)
create index ix_pdf_details on pdf_details(prodname)
Sample records
insert into pdf_details
values ('A100', 'X', 'EN', 'HM', 'PDF'),
('A100', 'X', 'JP', 'GM', 'PDF'),
('A100', 'X', 'EN', 'GM', 'PDF'),
('B101', 'Y', 'EN', 'HM', 'PDF');
insert into html_details
values ('B100', 'X', 'EN', 'HM', 'HTML'),
('B100', 'X', 'JP', 'GM', 'HTML'),
('B100', 'X', 'EN', 'GM', 'HTML'),
('C101', 'Y', 'EN', 'GH', 'HTML');
In reality, these tables contain millions of rows.
Original query
SELECT DISTINCT
TP.PRODID AS ID,
TP.PRODNAME AS NAME,
TP.LANG AS LANG,
TP.FMT,
TP.TYPE
FROM
PDF_DETAILS TP
WHERE
TP.PRODID = @PRODID
AND (@PRODNAME IS NULL OR
REPLACE(REPLACE(REPLACE(REPLACE(TP.PRODNAME, '™', '|TM'), '®', '|TS'), '©', '|CP'), '°', '|DEG')
LIKE REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@PRODNAME, '[', '\['), '_', '\_'), '™', '|TM'), '®', '|TS'), '©', '|CP'), '°', '|DEG') ESCAPE '\')
UNION ALL
SELECT DISTINCT
TP.PRODID AS ID,
TP.PRODNAME AS NAME,
TP.LANG AS LANG,
TP.FMT,
TP.TYPE
FROM
HTML_DETAILS TP
WHERE
TP.PRODID = @PRODID
AND (@PRODNAME IS NULL OR
REPLACE(REPLACE(REPLACE(REPLACE(TP.PRODNAME,'™','|TM'),'®','|TS'),'©','|CP'),'°','|DEG')
LIKE REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(REPLACE(@PRODNAME,'[','\['),'_','\_'),'™','|TM'),'®','|TS'),'©','|CP'),'°','|DEG') ESCAPE '\')
As of SQL Server 2012, you can use the OFFSET ... FETCH approach to paging; Google for it, there are tons of great articles about it.
Basically, you have to do something like this:
SELECT (list-of-columns)
FROM YourTable
(optionally add JOINs here)
WHERE (conditions)
ORDER BY (some column)
OFFSET n ROWS
FETCH NEXT y ROWS ONLY
Basically, you must have an ORDER BY (since offsetting / skipping only makes sense when you know what your data is ordered by). The OFFSET clause (with a fixed number or a SQL Server variable @offset) then defines how many rows (in that defined ordering) to skip, and the FETCH NEXT clause (again with a fixed number or a SQL Server variable @numrows) defines how many rows will be returned.
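Applied to the pdf_details table from the question, a 25-rows-per-page query might look like this sketch (the page parameters are hypothetical):

DECLARE @PageNumber int = 1, @PageSize int = 25;

SELECT prodid, prodname, lang, fmt, [type]
FROM pdf_details
ORDER BY prodid, lang, fmt -- a deterministic order, here the PK columns
OFFSET (@PageNumber - 1) * @PageSize ROWS
FETCH NEXT @PageSize ROWS ONLY;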
Imagine I have something like the following
SELECT 0 AS 'Key','No' AS 'Value'
UNION
SELECT 1 AS 'Key','YES' AS 'Value'
UNION
SELECT 2 AS 'Key','Maybe' AS 'Value'
....
....
How can I make the above statement more readable so I can accommodate more constant key/value pairs in a single select statement? I don't want to create a table variable or a complex SQL statement; just a single select statement returning a bunch of constant key/value pairs.
You can use VALUES:
SELECT *
FROM (VALUES
(0, 'No'),
(1, 'Yes'),
(2, 'Maybe')
) t([Key], Value)
Table Value Constructor
Using a table value constructor, which has to appear in a FROM clause:
SELECT * FROM (VALUES (0, 'NO'), (1, 'YES'), (2, 'MAYBE')) t([Key], [Value])
Understood that you don't want to create a table variable; I use the accepted answer a lot (+1). Just pointing out that a table variable lets you declare types and a primary key:
declare @tbl table ([key] tinyint primary key, [value] varchar(12));
insert into @tbl values (1, 'one')
, (2, 'two')
, (3, 'three');
select * from @tbl order by [key];
I have a table
CREATE TABLE [StudentsByKindergarten]
(
[FK_KindergartenId] [int] IDENTITY(1,1) NOT NULL,
[StudentList] [nvarchar](max)
)
where the entries are
(1, "John, Alex, Sarah")
(2, "")
(3, "Jonny")
(4, "John, Alex")
I want to migrate this information to the following table.
CREATE TABLE [KindergartenStudents]
(
[FK_KindergartenId] [int] NOT NULL,
[StudentName] [nvarchar](max) NOT NULL
)
so that it will have
(1, "John")
(1, "Alex")
(1, "Sarah")
(3, "Jonny")
(4, "John")
(4, "Alex")
I think I can achieve split function using something like the answer here: How do I split a string so I can access item x?
Using the function here:
http://www.codeproject.com/Articles/7938/SQL-User-Defined-Function-to-Parse-a-Delimited-Str
I can do something like this,
INSERT INTO [KindergartenStudents] ([FK_KindergartenId], [Studentname])
SELECT
sbk.FK_KindergartenId,
parsed.txt_value
FROM
[StudentsByKindergarten] sbk, dbo.fn_ParseText2Table(sbk.StudentList,',') parsed
GO
but it doesn't seem to work.
Based on this question, I've learned a better approach for this problem. You just need to use CROSS APPLY with your suggested function fn_ParseText2Table.
Sample Fiddle
INSERT INTO KindergartenStudents
(FK_KindergartenId, StudentName)
SELECT
sbk.FK_KindergartenId,
parsed.txt_value
FROM
StudentsByKindergarten sbk
CROSS APPLY
fn_ParseText2Table(sbk.StudentList, ',') parsed
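If you are on SQL Server 2016 or later, the built-in STRING_SPLIT can stand in for the custom function; a sketch, assuming plain comma-delimited lists:

INSERT INTO KindergartenStudents (FK_KindergartenId, StudentName)
SELECT
    sbk.FK_KindergartenId,
    LTRIM(s.value) -- trim the space after each comma
FROM StudentsByKindergarten sbk
CROSS APPLY STRING_SPLIT(sbk.StudentList, ',') s
WHERE sbk.StudentList <> '';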
I've used the function that you suggested (fn_ParseText2Table) and the following T-SQL works. You can test it with this fiddle: link.
BEGIN
DECLARE
@ID int,
@iterations int
-- Iterate over the number of non-empty rows
SET @iterations =
(SELECT
COUNT(*)
FROM
StudentsByKindergarten
WHERE
DATALENGTH(StudentList) > 0
)
WHILE ( @iterations > 0 )
BEGIN
-- Select the ID of the row where row_number() = @iterations
SET @ID =
(SELECT
FK_KindergartenId
FROM
(SELECT
*,
ROW_NUMBER() OVER (ORDER BY FK_KindergartenId DESC) as rn
FROM
StudentsByKindergarten
WHERE
DATALENGTH(StudentList) > 0) rows
WHERE
rows.rn = @iterations
)
SET @iterations -= 1
-- Insert the parsed values
INSERT INTO KindergartenStudents
(FK_KindergartenId, StudentName)
SELECT
@ID,
parsed.txt_value
FROM
fn_ParseText2Table
(
(SELECT
StudentList
FROM
StudentsByKindergarten
WHERE
FK_KindergartenId = @ID),
',') parsed
END
END