Print bullet before each sentence + new line after each sentence (SQL)

I have a text like: Sentence one. Sentence two. Sentence three.
I want it to be:
Sentence one.
Sentence two.
Sentence three.
I assume I can replace '.' with '.' + char(10) + char(13), but how do I go about the bullets? The '•' character works fine if printed manually; I just do not know how to bullet every sentence, including the first.

-- Initial string
declare @text varchar(100)
set @text = 'Sentence one. Sentence two. Sentence three.'
-- Setting up replacement text - new lines (assuming this works) and bullets ( char(149) )
declare @replacement varchar(100)
set @replacement = '.' + char(10) + char(13) + char(149)
-- Adding a bullet at the beginning and doing the replacement, but this will also add a trailing bullet
declare @processedText varchar(100)
set @processedText = char(149) + ' ' + replace(@text, '.', @replacement)
-- Figure out length of substring to select in the next step
declare @substringLength int
set @substringLength = LEN(@processedText) - CHARINDEX(char(149), REVERSE(@processedText))
-- Removes trailing bullet
select substring(@processedText, 0, @substringLength)
I've tested here - https://data.stackexchange.com/stackoverflow/qt/119364/
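For what it's worth, a shorter variant of the same idea is to replace the two-character sequence '. ' instead of '.' alone; the final period then never triggers a replacement, so no trailing bullet is produced (a sketch, untested against edge cases such as abbreviations):

declare @text varchar(100)
set @text = 'Sentence one. Sentence two. Sentence three.'
-- char(13) + char(10) is the conventional CR/LF order
select char(149) + ' ' + replace(@text, '. ', '.' + char(13) + char(10) + char(149) + ' ')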
I should point out that doing this in T-SQL doesn't seem correct. T-SQL is meant to process data; any presentation-specific work should be done in the code that calls this T-SQL (C# or whatever you're using).

Here's my over-the-top approach, but I feel it's fairly solid. It combines the classic SQL problem-solving techniques of a numbers table for string splitting and FOR XML for concatenating the split lines back together. The code is long, but the only place you'd actually need to edit is the SOURCE_DATA section.
No knock on @Jeremy Wiggins' approach, but I prefer mine as it lends itself well to a set-based approach, in addition to being fairly efficient code.
-- This code will rip lines apart based on @delimiter
-- and put them back together based on @rebind
DECLARE
@delimiter char(1)
, @rebind varchar(10);
SELECT
@delimiter = '.'
, @rebind = char(10) + char(149) + ' ';
-- L0 to L5 simulate a numbers table
-- http://billfellows.blogspot.com/2009/11/fast-number-generator.html
WITH L0 AS
(
SELECT
0 AS C
UNION ALL
SELECT
0
)
, L1 AS
(
SELECT
0 AS c
FROM
L0 AS A
CROSS JOIN L0 AS B
)
, L2 AS
(
SELECT
0 AS c
FROM
L1 AS A
CROSS JOIN L1 AS B
)
, L3 AS
(
SELECT
0 AS c
FROM
L2 AS A
CROSS JOIN L2 AS B
)
, L4 AS
(
SELECT
0 AS c
FROM
L3 AS A
CROSS JOIN L3 AS B
)
, L5 AS
(
SELECT
0 AS c
FROM
L4 AS A
CROSS JOIN L4 AS B
)
, NUMS AS
(
SELECT
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS number
FROM
L5
)
, SOURCE_DATA (ID, content) AS
(
-- This query simulates your input data
SELECT 1, 'Sentence one. Sentence two. Sentence three.'
UNION ALL SELECT 7, 'In seed time learn, in harvest teach, in winter enjoy.Drive your cart and your plow over the bones of the dead.The road of excess leads to the palace of wisdom.Prudence is a rich, ugly old maid courted by Incapacity.He who desires but acts not, breeds pestilence.'
)
, MAX_LENGTH AS
(
-- this query is rather important. The current NUMS query generates a
-- very large set of numbers but we only need 1 to the maximum length of our
-- source data. We can take advantage of a 2008 feature that lets
-- TOP take a dynamic value
SELECT TOP (SELECT MAX(LEN(SD.content)) AS max_length FROM SOURCE_DATA SD)
N.number
FROM
NUMS N
)
, MULTI_LINES AS
(
-- This query will make many lines out of a single line based on the supplied delimiter
-- Need to retain the ID (or some unique value from the original data) to regroup it
-- http://www.sommarskog.se/arrays-in-sql-2005.html#tblnum
SELECT
SD.ID
, LTRIM(substring(SD.content, Number, charindex(@delimiter, SD.content + @delimiter, Number) - Number)) + @delimiter AS lines
FROM
MAX_LENGTH
CROSS APPLY
SOURCE_DATA SD
WHERE
Number <= len(SD.content)
AND substring(@delimiter + SD.content, Number, 1) = @delimiter
)
, RECONSTITUTE (content, ID) AS
(
-- use classic concatenation to put it all back together
-- using LF + bullet + space as the delimiter
-- as a correlated subquery joined back to our original table to preserve IDs
-- https://stackoverflow.com/questions/5196371/sql-query-concatenating-results-into-one-string
SELECT DISTINCT
STUFF
(
(
SELECT @rebind + M.lines
FROM MULTI_LINES M
WHERE M.ID = ML.ID
FOR XML PATH('')
)
, 1
, 1
, '')
, ML.ID
FROM
MULTI_LINES ML
)
SELECT
R.content
, R.ID
FROM
RECONSTITUTE R
Results
content ID
----------------------------------------------------------- ---
• In seed time learn, in harvest teach, in winter enjoy.
• Drive your cart and your plow over the bones of the dead.
• The road of excess leads to the palace of wisdom.
• Prudence is a rich, ugly old maid courted by Incapacity.
• He who desires but acts not, breeds pestilence. 7
• Sentence one.
• Sentence two.
• Sentence three. 1
(2 row(s) affected)
References
Number table: http://billfellows.blogspot.com/2009/11/fast-number-generator.html
Splitting strings via number table: http://www.sommarskog.se/arrays-in-sql-2005.html#tblnum
SQL Query - Concatenating Results into One String: https://stackoverflow.com/questions/5196371/sql-query-concatenating-results-into-one-string

select '• On '+ cast(getdate() as varchar)+' I discovered how to do this '


SQL Server Recursive CTE not returning expected rows

I'm building a Markov chain name generator. I'm trying to replace a while loop with a recursive CTE. Limitations in using top and order by in the recursive part of the CTE have led me down the following path.
The point of all of this is to generate names based on a model, which is just another word that I've chunked out into three-character segments, stored in three columns in the Markov_Model table. The next character in the sequence will be a character from the Markov_Model, such that the 1st and 2nd characters in the model match the penultimate and ultimate characters in the word being generated. Rather than generate a probability matrix for that third character, I'm using a scalar function that finds all the characters that fit the criteria and gets one of them randomly: order by newid().
The problem is that this formulation of the CTE gets the desired number of rows in the anchor segment, but the union that recursively calls the CTE only unions one row from the anchor. I've attached a sample of the desired output at the bottom.
The query:
;with names as
(
select top 5
cast('+' as nvarchar(50)) as char1,
cast('+' as nvarchar(50)) as char2,
cast(char3 as nvarchar(50)) as char3,
cast('++' + char3 as nvarchar(100)) as name_in_progress,
1 as iteration
from markov_Model
where char1 is null
and char2 is null
order by newid() -- Get some random starting characters
union all
select
n.char2 as char1,
n.char3 as char2,
cast(fnc.addition as nvarchar(50)) as char3,
cast(n.name_in_progress + fnc.addition as nvarchar(100)),
1 + n.iteration
from names n
cross apply (
-- This function takes the preceding two characters,
-- and gets a random character that follows the pattern
select isnull(dbo.[fn_markov_getNext] (n.char2, n.char3), ',') as addition
) fnc
)
select *
from names
option (maxrecursion 3) -- For debug
The trouble is the union only unions one row.
Example output:
char1 char2 char3 name_in_progress iteration
+ + F ++F 1
+ + N ++N 1
+ + K ++K 1
+ + S ++S 1
+ + B ++B 1
+ B a ++Ba 2
B a c ++Bac 3
a c h ++Bach 4
Note I'm using + and , as null replacers/delimiters.
What I want to see is the entirety of the previous recursion, with the addition of the new characters to the name_in_progress; each pass should modify the entirety of the previous pass.
My desired output would be:
Top 10 of the Markov_Model table:
Text of the function that gets the next character from the Markov_Model:
CREATE FUNCTION [dbo].[fn_markov_getNext]
(
@char2 nvarchar(1),
@char3 nvarchar(1)
)
RETURNS nvarchar(1)
AS
BEGIN
DECLARE @newChar nvarchar(1)
set @newChar = (
select top 1
isnull(char3, ',')
from markov_Model mm
where isnull(mm.char1, '+') = isnull(@char2, '+')
and isnull(mm.char2, '+') = isnull(@char3, ',')
order by (select new_id from vw_getGuid) -- A view that calls newid()
)
return @newChar
END

Advice on improving query performance (currently ~1 day)

I have 5 tables, each with tens of thousands of records:
1 main/very important table (TABLE A)
2 other tables (TABLES B/C) that are still important, but not as important as table A
2 side tables (TABLES D/E) that hold primary keys between A<=>B and A<=>C, i.e. they only have two columns each
The 3 main tables have ~140 columns each, and all have the same column names.
The purpose of my query is to perform column-level matching between all the tables, A<=>D<=>B and A<=>E<=>C, in one query.
The final query will have about 286 columns (two ID columns from each main table, plus one match-result column for each compared field):
select tableA.ID1 as [TABLEAID1],
tableA.ID2 as [TABLEAID2],
tableB.ID1 as [TABLEBID1],
tableB.ID2 as [TABLEBID2],
tableC.ID1 as [TABLECID1],
tableC.ID2 as [TABLECID2],
fn_TESTMatcher(tableA.[postCode], tableB.[postCode]) as
[TABLEAB.postCode.RESULT],
fn_TESTMatcher(tableA.[CityCode], tableB.[CityCode]) as
[TABLEAB.CityCode.RESULT],
.
.
. x238 more 'fn_TESTMatcher(...) as xyz' columns
.
INTO #Results
From ((( tableA WITH (NOLOCK)
FULL JOIN tableD WITH (NOLOCK) ON tableA.ID1 = tableD.A
) FULL JOIN tableB WITH (NOLOCK) ON tableD.B = tableB.ID1
) FULL JOIN tableE WITH (NOLOCK) ON tableA.ID1 = tableE.A
) FULL JOIN tableC WITH (NOLOCK) ON tableE.B = tableC.ID
fn_TESTMatcher is a function: it is fed the same column from two main tables, removes/replaces special characters and abbreviations, and then tries to match the values; if they match it returns bit '1', otherwise bit '0'.
At the moment it takes about a day to run (I can't really time it with a query timer). I can comment out all the columns except the last and run it, and it's fairly quick, but I don't think I can just scale that up.
Does anyone have some advice? My first assumption is to start googling what indexes are and maybe apply one to the ID1 of every table, although I'm a bit hesitant about a) messing up my tables and b) adding an index that ends up being useless.
===========================================
Update 1: table-structure-wise, all the columns in the main tables are varchars, length 100-250 characters, and ID (the primary key) is not nullable.
The two side tables just have two columns, both varchar with a 100-character limit (they're both foreign keys). The most important table's ID in this is not nullable.
For functions, I technically have two:
FUNCTION [dbo].[fn_TESTStripCharacters]
(
@String NVARCHAR(MAX) ,
@MatchExpression VARCHAR(255)
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
DECLARE @expres VARCHAR(50) = '%[~,@,#,^,_,+,-,$,%,&,/,|,\,*,(,),.,!,`,:,<,>,?]%'
WHILE PATINDEX( @expres, @String ) > 0
SET @String = REPLACE(REPLACE(REPLACE( @String, SUBSTRING( @String, PATINDEX( @expres, @String ), 1 ),''),';',''),'-','')
RETURN @String
END
And the second function:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [dbo].[fn_TESTMatcher](@Field1 NVARCHAR(MAX), @Field2
NVARCHAR(MAX))
RETURNS BIT
BEGIN
SET @Field1 = UPPER(LTRIM(RTRIM(REPLACE(dbo.fn_TESTStripCharacters(@Field1,@SpecialCharacters),'-',''))))
SET @Field2 = UPPER(LTRIM(RTRIM(REPLACE(dbo.fn_TESTStripCharacters(@Field2,@SpecialCharacters),'-',''))))
SET @Field1 = REPLACE(@Field1,' RD ',' ROAD ')
SET @Field2 = REPLACE(@Field2,' RD ',' ROAD ')
SET @Field1 = REPLACE(@Field1,' ST ',' STREET ')
SET @Field2 = REPLACE(@Field2,' ST ',' STREET ')
SET @Field1 = REPLACE(@Field1,' ','')
SET @Field2 = REPLACE(@Field2,' ','')
RETURN
CASE WHEN @Field1=@Field2
THEN '1'
ELSE '0'
END
END
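For illustration, with the sample data from update 2 below, the function would behave like this (a sketch; note that the function as posted also references an undeclared @SpecialCharacters variable, which would need to be defined for it to compile):

select dbo.fn_TESTMatcher('IG11PJ', 'IG1 1PJ')  -- 1: spaces are stripped before comparing
select dbo.fn_TESTMatcher('London', 'london')   -- 1: UPPER() makes the match case-insensitive
select dbo.fn_TESTMatcher('IG11PJ', 'IG22PJ')   -- 0: no match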
=============================
Update 2
Example table data, assuming the same two records exist in all 3 tables (not always the case):
TableA (main + most important table):
ID1 ID2 postCode, cityCode, ................
10001 1221 IG11PJ London ................
10230 1022 IG22PJ Nottingham ................
tableB (slightly less important table)
ID1 ID2 postCode, cityCode, ................
10031 1011 IG1 1PJ london ................
10980 982 IG2 2PJ nottingham ................
tableC (slightly less important table)
ID1 ID2 postCode, cityCode, ................
10551 1011 iG1 1pj london ................
20980 982 iG2 2pJ nottingham ................
tableD (side table)
A B
10001 10031
10230 10980
table E (side table)
A B
10001 10551
10230 20980
If Tables A, B, and C should be identical save for formatting differences, I would suggest you create 3 CTEs: the first selecting the Table A ID and a HASHBYTES of all other columns (columns will need to be cast to char/varchar so any formatting and replacing can take place there), the second CTE the same for Table B, and the third for Table C.
Then just match the HASHBYTES values.
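A minimal sketch of that idea, assuming hypothetical column names col1..col3 in place of the ~140 real ones (any cleaning or normalizing would happen inside the expression fed to HASHBYTES):

with HashA as
(
select ID1, hashbytes('SHA2_256', upper(concat(col1, '|', col2, '|', col3))) as RowHash
from tableA
)
, HashB as
(
select ID1, hashbytes('SHA2_256', upper(concat(col1, '|', col2, '|', col3))) as RowHash
from tableB
)
select a.ID1 as TableAID1, b.ID1 as TableBID1
from HashA a
join tableD d on d.A = a.ID1
join HashB b on b.ID1 = d.B
where a.RowHash = b.RowHash; -- identical normalized rows hash to the same value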
As has already been said though, without sample data, table structures, DDL for the function, etc., we are just guessing.
Sean and Milney both make very good points regarding scalar vs. inline table functions and the use of NOLOCK.
I see this as a task that does not belong in one query. I would create a new set of tables (or reuse these tables, if you have a backup / don't need to preserve the data) and then perform your data-cleaning steps into those new tables.
Once you are happy the data has been normalized, do a single query to compare the tables.
Trying to put it all in one query gives no advantage, and you can't make stepwise progress. For example, if you find you forgot to strip spaces out of one field, you have to redo EVERYTHING. If you make new tables with the "cleaned" data, you can incrementally invest time on cleaning the data (which is clearly the slow part of this process) till the data is perfect, and then run your quick comparison. Forgot something? It is a relatively quick update and re-run.
Instead of all the headaches you are running through, making copies of everything and then trying to parse based on functions that cannot be optimized, I would suggest the following. You state you have a column that gets stripped of special characters. I would add a "CleanKey" column for each table and represented column. Then, via a table trigger or in the add/save path, pre-clean that value into the "CleanKey" column and you are done. Then have an index on THOSE "Clean" columns and do a direct join, as sketched below.
Since the rest of the system does not know of these "Clean" columns, you can add the columns, clean them with whatever function you have, and not worry about duplicating or otherwise ruining other data.
Yes, it may take a bit to pre-clean these columns, but then it's DONE. Your query should be fast after that.
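A sketch of that idea for a single column (hypothetical names; repeat per compared column, with whatever cleaning rules you settle on):

alter table tableA add postCodeClean varchar(250);
alter table tableB add postCodeClean varchar(250);

-- one-off backfill; afterwards a trigger or the application keeps it current
update tableA set postCodeClean = upper(replace(postCode, ' ', ''));
update tableB set postCodeClean = upper(replace(postCode, ' ', ''));

create index IX_tableA_postCodeClean on tableA (postCodeClean);
create index IX_tableB_postCodeClean on tableB (postCodeClean);

-- the comparison then becomes a plain indexed join instead of a function call per column
select a.ID1, b.ID1
from tableA a
join tableD d on d.A = a.ID1
join tableB b on b.ID1 = d.B
where a.postCodeClean = b.postCodeClean;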
I would agree with the others that cleaning these string values would be a good idea. But since you still need to accomplish that, and I absolutely hate loops and scalar functions with a passion, I decided to roll up an inline table-valued function instead of these two nested scalar functions. I am not using any loops here, and the performance might surprise you.
I am using a tally or numbers table for this. I like to keep one of these around as a view. Here is the code for the view I use.
create View [dbo].[cteTally] as
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
select N from cteTally
GO
Then you can use this tally table to derive a set-based approach that accommodates the business rules you have for deciding if two values match. You also don't need comma delimiters in here; in your example you had a comma after nearly every other character in the list of values to remove, and a single instance of each character is sufficient.
create function [dbo].[fn_TESTMatcher_Sean]
(
@Field1 nvarchar(max)
, @Field2 nvarchar(max)
, @CharsToRemove nvarchar(max)
) returns table as
RETURN
with MyValues1 as
(
select substring(@Field1, N, 1) as MyChar
, t.N
from cteTally t
where N <= len(@Field1)
and charindex(substring(@Field1, N, 1), @CharsToRemove) = 0
)
, MyValues2 as
(
select substring(@Field2, N, 1) as MyChar
, t.N
from cteTally t
where N <= len(@Field2)
and charindex(substring(@Field2, N, 1), @CharsToRemove) = 0
)
select convert(bit, case when mv1.MyResult = mv2.MyResult then 1 else 0 end) as IsMatch
from
(
select distinct MyResult =
replace(
replace(replace(STUFF((select MyChar + ''
from MyValues1 mv2
order by mv2.N
FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)'), 1, 0, '')
, ' RD ', ' ROAD ')
, ' ST ', ' STREET ')
, ' ', '')
from MyValues1 mv
) mv1
cross join
(
select distinct MyResult =
replace(
replace(replace(STUFF((select MyChar + ''
from MyValues2 mv2
order by mv2.N
FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)'), 1, 0, '')
, ' RD ', ' ROAD ')
, ' ST ', ' STREET ')
, ' ', '')
from MyValues2 mv
) mv2
;
Give this a shot and let me know if this works in your environment.
For example:
select *
from dbo.fn_TESTMatcher_Sean('123 any st rd or something', '123 any street road or something', '%[~,@#^_+-$%&/|\*().!`:<>?]%')
The above returns 1 because they are a match under the rules you defined.

How to modify T-SQL to process multiple records, not just one

I am working on a function to remove/replace special characters from a string in a column named "Title". Currently I am testing the code one record at a time. I would like to test the code against all the records in the table, but I do not know how to modify the current T-SQL to process all the records rather than just one at a time. I would appreciate it if someone could show me how, or what type of modifications I need to make to be able to process all records.
This is the code as I have it right now:
DECLARE @str VARCHAR(400);
DECLARE @expres VARCHAR(50) = '%[~,@,#,$,%,&,*,(,),.,!,´,:]%'
SET @str = (SELECT REPLACE(REPLACE(LOWER([a].[Title]), CHAR(9), ''), ' ', '_') FROM [dbo].[a] WHERE [a].[ID] = '43948')
WHILE PATINDEX(@expres, @str) > 0
SET @str = REPLACE(REPLACE(@str, SUBSTRING(@str, PATINDEX(@expres, @str), 1), ''), '-', ' ')
SELECT @str COLLATE SQL_Latin1_General_CP1251_CS_AS
For a Title containing the value: Schöne Wiege Meiner Leiden, the output after the code is applied would be: schone_wiege_meiner_leiden
I would like to make the code work to process multiple records rather than just one at a time, as is currently done by specifying the ID. I want to process records in bulk.
I hope I can get some help, thank you in advance for your help.
Code example taken from: remove special characters from string in sql server
There is no need for a loop here. You can instead use a tally table, and this can become a set-based inline table-valued function quite easily. Performance-wise it will blow the doors off a loop-based scalar function.
I keep a tally table as a view in my system. Here is the code for the tally table.
create View [dbo].[cteTally] as
WITH
E1(N) AS (select 1 from (values (1),(1),(1),(1),(1),(1),(1),(1),(1),(1))dt(n)),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS
(
SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
)
select N from cteTally
Now comes the fun part: using this to parse strings and all kinds of other things. It has been dubbed the swiss army knife of T-SQL. Any time you start thinking loop, try to think about using a tally table instead. Here is how this function might look.
create function RemoveValuesFromString
(
@SearchVal nvarchar(max)
, @CharsToRemove nvarchar(max)
) returns table as
RETURN
with MyValues as
(
select substring(@SearchVal, N, 1) as MyChar
, t.N
from cteTally t
where N <= len(@SearchVal)
and charindex(substring(@SearchVal, N, 1), @CharsToRemove) = 0
)
select distinct MyResult = STUFF((select MyChar + ''
from MyValues mv2
order by mv2.N
FOR XML PATH(''),TYPE).value('.','NVARCHAR(MAX)'), 1, 0, '')
from MyValues mv
;
Here is an example of how you might be able to use this. I am using a table variable here but this could be any table or whatever.
declare @SomeTable table
(
SomeTableID int identity primary key clustered
, SomeString varchar(max)
)
insert @SomeTable
select 'This coffee cost $32.!!! This is a# tot$#a%l r)*i-p~!`of^%f' union all
select 'This &that'
select *
from @SomeTable st
cross apply dbo.RemoveValuesFromString(st.SomeString, '%[~,@#$%&*()!´:]%`^-') x
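Applied to the asker's table, the whole set can be processed in one pass with CROSS APPLY; a sketch, assuming the dbo.a / Title / ID names from the question and keeping the question's lowercase/underscore handling:

select a.ID, a.Title, x.MyResult as CleanTitle
from dbo.a a
cross apply dbo.RemoveValuesFromString(
replace(replace(lower(a.Title), char(9), ''), ' ', '_')
, '%[~,@#$%&*()!´:]%`^-') x;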

SQL Server 2012 T-SQL count number of words between elements of two sets

I have two sets of elements, let's say they are these words:
set 1: "nuclear", "fission", "dirty" and
set 2: "device", "explosive"
In my database, I have a text column (Description) which contains a sentence or two. I would like to find any records where Description contains both an element from set 1 followed by an element from set 2, where the two elements are separated by four words or less. For simplicity, counting (spaces-1) will count words between the two elements.
I'd prefer it if a solution didn't require the installation of anything like CLR functions for regular expressions. Rather, if this could be done with a user-defined table function, it would make deployment simpler.
Does this sound possible?
It is possible, but I do not think it will perform well with millions of rows.
I have a solution here that handles about 10,000 rows in 2 sec and 100,000 rows in about 20 sec on our server. It also requires the famous DelimitedSplit8K SQL table function from SQLServerCentral:
DECLARE @set1 VARCHAR(MAX) = 'nuclear, fission, dirty';
DECLARE @set2 VARCHAR(MAX) = 'device, explosive';
WITH GetDistances AS
(
SELECT
DubID = ROW_NUMBER() OVER (PARTITION BY ID ORDER BY ID)
, Distance = dbo.[cf_ValueSetDistance](s.Description, @set1, @set2)
, s.ID
, s.Description
FROM #sentences s
JOIN dbo.cf_DelimitedSplit8K(@set1, ',') s1 ON s.Description LIKE '%' + RTRIM(LTRIM(s1.Item)) + '%'
JOIN dbo.cf_DelimitedSplit8K(@set2, ',') s2 ON s.Description LIKE '%' + RTRIM(LTRIM(s2.Item)) + '%'
) SELECT Distance, ID, Description FROM GetDistances WHERE DubID = 1 AND Distance BETWEEN 1 AND 4;
--10 000 rows: 2sec
--100 000 rows: 20sec
Test data generator
--DROP TABLE #sentences
CREATE TABLE #sentences
(
ID INT IDENTITY(1,1) PRIMARY KEY
, Description VARCHAR(100)
);
GO
--CREATE 10000 random sentences that are 100 chars long
SET NOCOUNT ON;
WHILE((SELECT COUNT(*) FROM #sentences) < 10000)
BEGIN
DECLARE @randomWord VARCHAR(100) = '';
SELECT TOP 100 @randomWord = @randomWord + ' ' + Item FROM dbo.cf_DelimitedSplit8K('nuclear fission dirty device explosive On the other hand, we denounce with righteous indignation and dislike men who are so beguiled and demoralized by the charms of pleasure of the moment, so blinded by desire, that they cannot foresee the pain and trouble that are bound to ensue; and equal blame belongs to those who fail in their duty through weakness of will, which is the same as saying through shrinking from toil and pain. These cases are perfectly simple and easy to distinguish. In a free hour, when our power of choice is untrammelled and when nothing prevents our being able to do what we like best, every pleasure is to be welcomed and every pain avoided. But in certain circumstances and owing to the claims of duty or the obligations of business it will frequently occur that pleasures have to be repudiated and annoyances accepted. The wise man therefore always holds in these matters to this principle of selection: he rejects pleasures to secure other greater pleasures, or else he endures pains to avoid worse pains', ' ') ORDER BY NEWID();
INSERT INTO #sentences
SELECT @randomWord
END
SET NOCOUNT OFF;
Function 1 - cf_ValueSetDistance
CREATE FUNCTION [dbo].[cf_ValueSetDistance]
(
@value VARCHAR(MAX)
, @compareSet1 VARCHAR(MAX)
, @compareSet2 VARCHAR(MAX)
)
RETURNS INT
AS
BEGIN
SET @value = REPLACE(REPLACE(REPLACE(@value, '.', ''), ',', ''), '?', '');
DECLARE @distance INT;
DECLARE @sentence TABLE( WordIndex INT, Word VARCHAR(MAX) );
DECLARE @set1 TABLE(Word VARCHAR(MAX) );
DECLARE @set2 TABLE(Word VARCHAR(MAX) );
INSERT INTO @sentence
SELECT ItemNumber, RTRIM(LTRIM(Item)) FROM dbo.cf_DelimitedSplit8K(@value, ' ')
INSERT INTO @set1
SELECT RTRIM(LTRIM(Item)) FROM dbo.cf_DelimitedSplit8K(@compareSet1, ',')
IF(EXISTS(SELECT 1 FROM @sentence s JOIN @set1 s1 ON s.Word = s1.Word))
BEGIN
INSERT INTO @set2
SELECT RTRIM(LTRIM(Item)) FROM dbo.cf_DelimitedSplit8K(@compareSet2, ',');
IF(EXISTS(SELECT 1 FROM @sentence s JOIN @set2 s2 ON s.Word = s2.Word))
BEGIN
WITH Set1 AS (
SELECT s.WordIndex, s.Word FROM @sentence s
JOIN @set1 s1 ON s1.Word = s.Word
), Set2 AS
(
SELECT s.WordIndex, s.Word FROM @sentence s
JOIN @set2 s2 ON s2.Word = s.Word
)
SELECT @distance = MIN(ABS(s2.WordIndex - s1.WordIndex)) FROM Set1 s1, Set2 s2
END
END
RETURN @distance;
END
Function 2 - DelimitedSplit8K
(No need to even try to understand this code, this is an extremely fast function for splitting a string to a table, written by several talented people):
CREATE FUNCTION [dbo].[cf_DelimitedSplit8K]
(@pString VARCHAR(8000), @pDelimiter CHAR(1))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
--===== "Inline" CTE Driven "Tally Table" produces values from 0 up to 10,000...
-- enough to cover NVARCHAR(4000)
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l;
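For example, splitting a short string shows the (ItemNumber, Item) shape that the queries above rely on:

select ItemNumber, Item
from dbo.cf_DelimitedSplit8K('nuclear fission dirty', ' ');
-- ItemNumber  Item
-- 1           nuclear
-- 2           fission
-- 3           dirty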
I don't know anything about performance, but this could be done with CROSS APPLY and two table variables.
--initialize word set data
DECLARE @set1 TABLE (wordFromSet varchar(50))
DECLARE @set2 TABLE (wordFromSet varchar(50))
INSERT INTO @set1 SELECT 'nuclear' UNION SELECT 'fission' UNION SELECT 'dirty'
INSERT INTO @set2 SELECT 'device' UNION SELECT 'explosive'
SELECT *
FROM MyTable m
CROSS APPLY
(
SELECT wordFromSet
,LEN(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description))) - LEN(REPLACE(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description)),' ', '')) AS WordPosition
FROM @set1
WHERE m.Description LIKE '%' + wordFromSet + '%'
) w1
CROSS APPLY
(
SELECT wordFromSet
,LEN(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description))) - LEN(REPLACE(SUBSTRING(m.Description, 1, CHARINDEX(wordFromSet, m.Description)),' ', '')) AS WordPosition
FROM @set2
WHERE m.Description LIKE '%' + wordFromSet + '%'
) w2
WHERE w2.WordPosition - w1.WordPosition <= 4 -- the question's four-word threshold
Essentially it will only return rows from MyTable that have at least one word from both sets; for those rows it calculates each word's position by taking the difference in length between the substring that ends at the word's position and the same substring with the spaces removed.
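To see that position trick in isolation (a small illustration, not part of the original answer):

declare @d varchar(100) = 'a dirty explosive device';
declare @w varchar(20) = 'device';
select len(substring(@d, 1, charindex(@w, @d)))
     - len(replace(substring(@d, 1, charindex(@w, @d)), ' ', '')) as WordPosition;
-- returns 3: three spaces precede 'device', so it is the fourth word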
I am adding a new answer, even if my old one has been accepted and I can see you went for the "FULL TEXT INDEX".
I have looked at the answer @Louis gave, and I think it was clever to use "CROSS APPLY". His answer beats the performance of mine. The only problem is that his code will only compare from the first instance of a word. This made me want to combine his answer with the split function I used (DelimitedSplit8K from SQLServerCentral).
This results in a remarkable performance boost; I have tested this on 1 million rows, and the result was almost instant:
My old answer: 5 min
@Louis' answer: 2 min
New answer: 3 sec
This does not beat the "FULLTEXT INDEX" performance-wise, but it at least supports the word-search combination you specified in a relatively effective way.
DECLARE @set1 TABLE (Word VARCHAR(50))
DECLARE @set2 TABLE (Word VARCHAR(50))
INSERT INTO @set1 SELECT 'nuclear' UNION SELECT 'fission' UNION SELECT 'dirty'
INSERT INTO @set2 SELECT 'device' UNION SELECT 'explosive'
SELECT * FROM #sentences s
CROSS APPLY
(
SELECT * FROM @set1 s1
JOIN dbo.cf_DelimitedSplit8K(s.Description, ' ') split ON split.Item = s1.Word
) s1
CROSS APPLY
(
SELECT * FROM @set2 s2
JOIN dbo.cf_DelimitedSplit8K(s.Description, ' ') split ON split.Item = s2.Word
) s2
WHERE ABS(s1.ItemNumber - s2.ItemNumber) <= 4;
Look at my old answer for the code for the dbo.cf_DelimitedSplit8K function.

replace value in varchar(max) field with join

I have a table that contains text field with placeholders. Something like this:
Row Notes
1. This is some notes ##placeholder130## this ##myPlaceholder##, #oneMore#. End.
2. Second row...just a ##test#.
(This table contains about 1-5k rows on average. The average number of placeholders in one row is 5-15.)
Now, I have a lookup table that looks like this:
Name Value
placeholder130 Dog
myPlaceholder Cat
oneMore Cow
test Horse
(Lookup table will contain anywhere from 10k to 100k records)
I need to find the fastest way to join those placeholders from the strings to the lookup table and replace them with the values. So, my result should look like this (1st row):
This is some notes Dog this Cat, Cow. End.
What I came up with was to split each row into multiple rows, one per placeholder, join those to the lookup table, and then concatenate the records back into the original row with the new values, but it takes around 10-30 seconds on average.
You could try to split the string using a numbers table and rebuild it with FOR XML PATH.
select (
select coalesce(L.Value, T.Value)
from Numbers as N
cross apply (select substring(Notes.notes, N.Number, charindex('##', Notes.notes + '##', N.Number) - N.Number)) as T(Value)
left outer join Lookup as L
on L.Name = T.Value
where N.Number <= len(notes) and
substring('##' + notes, Number, 2) = '##'
order by N.Number
for xml path(''), type
).value('text()[1]', 'varchar(max)')
from Notes
SQL Fiddle
I borrowed the string splitting from this blog post by Aaron Bertrand
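The query assumes a Numbers table exists; if you don't have one, a quick sketch to build it (any standard numbers-table recipe works):

select top (100000) row_number() over (order by (select null)) as Number
into Numbers
from sys.all_objects a
cross join sys.all_objects b;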
SQL Server is not very fast with string manipulation, so this is probably best done client-side. Have the client load the entire lookup table and replace the notes as they arrive.
Having said that, it can of course be done in SQL. Here's a solution with a recursive CTE. It performs one lookup per recursion step:
; with Repl as
(
select row_number() over (order by l.name) rn
, Name
, Value
from Lookup l
)
, Recurse as
(
select Notes
, 0 as rn
from Notes
union all
select replace(Notes, '##' + l.name + '##', l.value)
, r.rn + 1
from Recurse r
join Repl l
on l.rn = r.rn + 1
)
select *
from Recurse
where rn =
(
select count(*)
from Lookup
)
option (maxrecursion 0)
Example at SQL Fiddle.
Another option is a while loop to keep replacing lookups until no more are found:
declare @notes table (notes varchar(max))
insert @notes
select Notes
from Notes
while 1=1
begin
update n
set Notes = replace(n.Notes, '##' + l.name + '##', l.value)
from @notes n
outer apply
(
select top 1 Name
, Value
from Lookup l
where n.Notes like '%##' + l.name + '##%'
) l
where l.name is not null
if @@rowcount = 0
break
end
select *
from @notes
Example at SQL Fiddle.
I second the comment that T-SQL is just not suited for this operation, but if you must do it in the db, here is an example using a function to manage the multiple replace statements.
Since you have a relatively small number of tokens in each note (5-15) and a very large number of lookup entries (10k-100k), my function first extracts the potential tokens from the input and uses that set to join to your lookup (dbo.[Lookup] below). It would be far too much work to look for an occurrence of every one of your tokens in each note.
I did a bit of perf testing using 50k tokens and 5k notes, and this function runs really well, completing in <2 seconds (on my laptop). Please report back how this strategy performs for you.
Note: in your example data the token format was not consistent (##_#, ##_##, #_#); I am guessing this was simply a typo and assume all tokens take the form ##TokenName##.
--setup
if object_id('dbo.[Lookup]') is not null
drop table dbo.[Lookup];
go
if object_id('dbo.fn_ReplaceLookups') is not null
drop function dbo.fn_ReplaceLookups;
go
create table dbo.[Lookup] (LookupName varchar(100) primary key, LookupValue varchar(100));
insert into dbo.[Lookup]
select '##placeholder130##','Dog' union all
select '##myPlaceholder##','Cat' union all
select '##oneMore##','Cow' union all
select '##test##','Horse';
go
create function [dbo].[fn_ReplaceLookups](@input varchar(max))
returns varchar(max)
as
begin
declare @xml xml;
select @xml = cast(('<r><i>'+replace(@input,'##' ,'</i><i>')+'</i></r>') as xml);
--extract the potential tokens
declare @LookupsInString table (LookupName varchar(100) primary key);
insert into @LookupsInString
select distinct '##'+v+'##'
from ( select [v] = r.n.value('(./text())[1]', 'varchar(100)'),
[r] = row_number() over (order by n)
from @xml.nodes('r/i') r(n)
)d(v,r)
where r%2=0;
--tokenize the input
select @input = replace(@input, l.LookupName, l.LookupValue)
from dbo.[Lookup] l
join @LookupsInString lis on
l.LookupName = lis.LookupName;
return @input;
end
go
--usage
declare @Notes table ([Id] int primary key, notes varchar(100));
insert into @Notes
select 1, 'This is some notes ##placeholder130## this ##myPlaceholder##, ##oneMore##. End.' union all
select 2, 'Second row...just a ##test##.';
select *,
dbo.fn_ReplaceLookups(notes)
from @Notes;
Returns:
Tokenized
--------------------------------------------------------
This is some notes Dog this Cat, Cow. End.
Second row...just a Horse.
Try this
;WITH CTE (org, calc, [Notes], [level]) AS
(
SELECT [Notes], [Notes], CONVERT(varchar(MAX),[Notes]), 0 FROM PlaceholderTable
UNION ALL
SELECT CTE.org, CTE.[Notes],
CONVERT(varchar(MAX), REPLACE(CTE.[Notes],'##' + T.[Name] + '##', T.[Value])), CTE.[level] + 1
FROM CTE
INNER JOIN LookupTable T ON CTE.[Notes] LIKE '%##' + T.[Name] + '##%'
)
SELECT DISTINCT org, [Notes], level FROM CTE
WHERE [level] = (SELECT MAX(level) FROM CTE c WHERE CTE.org = c.org)
SQL FIDDLE DEMO
Check the devioblog post for reference.
To get speed, you can preprocess the note templates into a more efficient form: a sequence of fragments, each ending in a substitution. The substitution might be NULL for the last fragment.
Notes
Id FragSeq Text SubsId
1 1 'This is some notes ' 1
1 2 ' this ' 2
1 3 ', ' 3
1 4 '. End.' null
2 1 'Second row...just a ' 4
2 2 '.' null
Subs
Id Name Value
1 'placeholder130' 'Dog'
2 'myPlaceholder' 'Cat'
3 'oneMore' 'Cow'
4 'test' 'Horse'
Now we can do the substitutions with a simple join.
SELECT Notes.Text + COALESCE(Subs.Value, '')
FROM Notes LEFT JOIN Subs
ON Notes.SubsId = Subs.Id
WHERE Notes.Id = ?
ORDER BY FragSeq
This produces a list of fragments with substitutions complete. I am not an MS SQL user, but in most dialects of SQL you can concatenate these fragments into a variable quite easily:
DECLARE @Note VARCHAR(8000)
SELECT @Note = COALESCE(@Note, '') + Notes.Text + COALESCE(Subs.Value, '')
FROM Notes LEFT JOIN Subs
ON Notes.SubsId = Subs.Id
WHERE Notes.Id = ?
ORDER BY FragSeq
Pre-processing a note template into fragments will be straightforward using the string splitting techniques of other posts.
Unfortunately I'm not at a location where I can test this, but it ought to work fine.
I really don't know how it will perform with 10k+ lookups.
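To make the pre-processing step concrete, here is a hedged sketch that fills the fragment table, assuming a hypothetical SourceNotes (Id, Notes) table and reusing the cf_DelimitedSplit8K splitter quoted in an earlier answer (after replacing '##' with a single spare character, odd split items are literal text and even items are token names):

with Split as
(
select n.Id, s.ItemNumber, s.Item
from SourceNotes n
cross apply dbo.cf_DelimitedSplit8K(replace(n.Notes, '##', '|'), '|') s -- '|' must not occur in the notes
)
insert into Notes (Id, FragSeq, Text, SubsId)
select f.Id
, (f.ItemNumber + 1) / 2 -- fragment sequence
, f.Item                 -- literal text
, sub.Id                 -- the token that follows this fragment, if any
from Split f
left join Split t on t.Id = f.Id and t.ItemNumber = f.ItemNumber + 1
left join Subs sub on sub.Name = t.Item
where f.ItemNumber % 2 = 1; -- keep only the literal fragments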
How does good old dynamic SQL perform?
DECLARE @sqlCommand NVARCHAR(MAX)
SELECT @sqlCommand = N'PlaceholderTable.[Notes]'
SELECT @sqlCommand = 'REPLACE( ' + @sqlCommand +
', ''##' + LookupTable.[Name] + '##'', ''' +
LookupTable.[Value] + ''')'
FROM LookupTable
SELECT @sqlCommand = 'SELECT *, ' + @sqlCommand + ' FROM PlaceholderTable'
EXECUTE sp_executesql @sqlCommand
Fiddle demo
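For the sample data above, the generated command comes out roughly like this (truncated to two lookups for brevity; the nesting order depends on the order the rows are read):

SELECT *, REPLACE(REPLACE(PlaceholderTable.[Notes],
'##placeholder130##', 'Dog'),
'##myPlaceholder##', 'Cat')
FROM PlaceholderTable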
And now for some recursive CTE.
If your indexes are correctly set up, this one should be very fast or very slow. SQL Server always surprises me with performance extremes when it comes to the r-CTE...
;WITH T AS (
SELECT
Row,
StartIdx = 1, -- 1 as first starting index
EndIdx = CAST(patindex('%##%', Notes) as int), -- first ending index
Result = substring(Notes, 1, patindex('%##%', Notes) - 1)
-- (first) temp result bounded by indexes
FROM PlaceholderTable -- **this is your source table**
UNION ALL
SELECT
pt.Row,
StartIdx = newstartidx, -- starting index (calculated in calc1)
EndIdx = EndIdx + CAST(newendidx as int) + 1, -- ending index (calculated in calc4 + total offset)
Result = Result + CAST(ISNULL(newtokensub, newtoken) as nvarchar(max))
-- temp result taken from subquery or original
FROM
T
JOIN PlaceholderTable pt -- **this is your source table**
ON pt.Row = T.Row
CROSS APPLY(
SELECT newstartidx = EndIdx + 2 -- new starting index moved by 2 from last end ('##')
) calc1
CROSS APPLY(
SELECT newtxt = substring(pt.Notes, newstartidx, len(pt.Notes))
-- current piece of txt we work on
) calc2
CROSS APPLY(
SELECT patidx = patindex('%##%', newtxt) -- current index of '##'
) calc3
CROSS APPLY(
SELECT newendidx = CASE
WHEN patidx = 0 THEN len(newtxt) + 1
ELSE patidx END -- if last piece of txt, end with its length
) calc4
CROSS APPLY(
SELECT newtoken = substring(pt.Notes, newstartidx, newendidx - 1)
-- get the new token
) calc5
OUTER APPLY(
SELECT newtokensub = Value
FROM LookupTable
WHERE Name = newtoken -- substitute the token if you can find it in **your lookup table**
) calc6
WHERE newstartidx + len(newtxt) - 1 <= len(pt.Notes)
-- keep recursing while {new starting index} + {length of txt we work on} does not exceed the total length
)
,lastProcessed AS (
SELECT
Row,
Result,
rn = row_number() over(partition by Row order by StartIdx desc)
FROM T
) -- enumerate all (including intermediate) results
SELECT *
FROM lastProcessed
WHERE rn = 1 -- filter out intermediate results (display only last ones)