SQL - Move multiple rows' worth of data to one row

What I am trying to do is take multiple rows of data from a column and insert it into a single cell. Here's what I have below:
HouseNumber    CustomerType
-----------    ------------
1              Residential
2              Commercial
2              Residential
3              Residential
And I need to get this to something that looks like this:
HouseNumber    CustomerType
-----------    ----------------------
1              Residential
2              Commercial Residential
3              Residential
I realize that this is against normalization; however, I simply need this data displayed this way so that I can view it more easily later on. The particular cell will never again be referenced for any individual item within it.
I tried to do this by creating two tables, one with a tempCustomerType field and one with a customerType field that is initially NULL, and then updating with the following:
UPDATE CustomerIdentifier
SET CustomerIdentifier.CustomerType = TempTable2.CustomerTypeTemp + CustomerIdentifier.CustomerType
FROM CustomerIdentifier
INNER JOIN TempTable2
ON CustomerIdentifier.SUB_ACCT_NO_OCI = TempTable2.SUB_ACCT_NO_OCI
However, after that each field was still null. So, any chance anyone here can help me? Thanks!
Also, if there is a way to do this without creating a second table, that would be great as well.

NULL + 1 in T-SQL will always return NULL; that is why each field was still NULL after your update.
The solutions for your problem are described here.
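For example, wrapping each side of the concatenation in ISNULL keeps the NULL starting value from swallowing the result; a minimal sketch against the tables from your question:
UPDATE CustomerIdentifier
SET CustomerIdentifier.CustomerType =
    ISNULL(TempTable2.CustomerTypeTemp, '') + ISNULL(CustomerIdentifier.CustomerType, '')
FROM CustomerIdentifier
INNER JOIN TempTable2
ON CustomerIdentifier.SUB_ACCT_NO_OCI = TempTable2.SUB_ACCT_NO_OCI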

We implemented our own CLR aggregate function as described here; you can then write:
DECLARE @test TABLE (
HouseNumber INT,
CustomerType VARCHAR(16)
)
INSERT INTO @test
SELECT 1, 'Residential'
UNION SELECT 2, 'Commercial'
UNION SELECT 2, 'Residential'
UNION SELECT 3, 'Residential'
SELECT HouseNumber, dbo.Concatenate(CustomerType)
FROM @test
GROUP BY HouseNumber
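If you are on SQL Server 2017 or later, the built-in STRING_AGG aggregate does the same job without a CLR assembly; a minimal sketch against the same table variable:
SELECT HouseNumber,
       STRING_AGG(CustomerType, ' ') AS CustomerType
FROM @test
GROUP BY HouseNumber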

Below is a simpler solution to the problem. It is unfortunately untested on my machine (SQL Server install borked); I will test tomorrow and edit the answer if necessary.
This will work with SQL Server 2005 and above and doesn't require any UDFs or CLR. It is also pretty fast.
/* Test table & data */
DECLARE @TestTable TABLE
(
HouseNumber int,
CustomerType varchar(100) -- wide enough to hold the concatenated list
)
;
INSERT @TestTable
SELECT 1, 'Residential' UNION ALL
SELECT 2, 'Commercial' UNION ALL
SELECT 2, 'Residential' UNION ALL
SELECT 3, 'Residential'
;
/* CTE to construct the concatenated data. */
WITH ConcatData (HouseNumber, CustomerType) as
(
SELECT TT1.HouseNumber,
       STUFF((SELECT ', ' + CustomerType
              FROM @TestTable TT2
              WHERE TT2.HouseNumber = TT1.HouseNumber
              FOR XML PATH ('')), 1, 2, '')
FROM @TestTable TT1
GROUP BY TT1.HouseNumber
)
/* Update the test table using the concatenated data from the CTE - joining on HouseNumber */
UPDATE trg
SET CustomerType = src.CustomerType
FROM @TestTable trg
INNER JOIN ConcatData src on src.HouseNumber = trg.HouseNumber
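After the UPDATE, both rows for HouseNumber 2 hold the full concatenated string, so a quick DISTINCT select shows the target shape:
SELECT DISTINCT HouseNumber, CustomerType
FROM @TestTable
ORDER BY HouseNumber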

Related

Efficient way to merge alternating values from two columns into one column in SQL Server

I have two columns in a table. I want to merge them into a single column, taking alternating values from each column.
For example:
Column A --> value (1,2,3)
Column B --> value (A,B,C)
Required result - (1,A,2,B,3,C)
It should be done without loops.
You need to make use of the UNION and get a little creative with how you choose to alternate. My solution ended up looking like this.
SELECT CAST(ColumnA AS varchar(10)) AS [value]
FROM [Table]
WHERE ColumnA % 2 = 1
UNION
SELECT ColumnB
FROM [Table]
WHERE ColumnA % 2 = 0
If you have an ID/PK column that could just as easily be used, I just didn't want to assume anything about your table.
EDIT:
If your table contains duplicates that you wish to keep, use UNION ALL instead of UNION
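Note that a plain UNION guarantees no particular row order. If you need the interleaved sequence 1,A,2,B,3,C explicitly, a sketch with a sort key (assuming ColumnA holds the sequence 1,2,3 as in the example) could look like:
SELECT CAST(ColumnA AS varchar(10)) AS [value],
       ColumnA * 2 - 1 AS ord  -- odd slots for ColumnA
FROM [Table]
UNION ALL
SELECT ColumnB,
       ColumnA * 2             -- even slots for ColumnB
FROM [Table]
ORDER BY ord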
Try this:
SELECT [value]
FROM [Table]
UNPIVOT
(
[value] FOR [Column] IN ([Column_A], [Column_B])
) UNPVT
If you have SQL Server 2017 or higher you can use:
SELECT QUOTENAME(STRING_AGG (cast(a as varchar(1)) + ',' + b, ','), '()')
FROM test;
In older versions, depending on how much data you have in your tables you can also try:
SELECT QUOTENAME(STUFF(
(SELECT ',' + cast(a as varchar(1)) + ',' + b
FROM test
FOR XML PATH('')), 1, 1,''), '()')
Here you can try a sample
http://sqlfiddle.com/#!18/6c9af/5
with data as (
select *, row_number() over (order by colA) as rn
from t
)
select rn,
case rn % 2 when 1 then colA else colB end as alternating
from data;
The following SQL uses undocumented aggregate concatenation technique. This is described in Inside Microsoft SQL Server 2008 T-SQL Programming on page 33.
declare @x varchar(max) = '';
declare @t table (a varchar(10), b varchar(10));
insert into @t values ('1','A'), ('2','B'), ('3','C');
select @x = @x + a + ',' + b + ','
from @t;
select '(' + LEFT(@x, LEN(@x) - 1) + ')';

Query help consolidating two tables

I have two tables (same structure) from two different databases that I'd like to consolidate using a single query if possible.
I'm trying to retrieve all distinct serial numbers and their item name, plus two category identifiers. The serial number is stored in 4 fields though. The other problem is that the name and category fields won't always be the same between the two tables (even though they should be, but that's another issue altogether). So, I want the query to return distinct SNs and the name and cat fields from the first table.
So I started with:
SELECT
LEFT(NUMBR_1,4) + '-' + LEFT(NUMBR_2,4) + '-' + LEFT(NUMBR_3,3) + '-' + LEFT(NUMBR_4,5) AS SN
,DESCR
,TYP
,ATNUM
FROM DB1.dbo.table1
UNION
SELECT
LEFT(NUMBR_1,4) + '-' + LEFT(NUMBR_2,4) + '-' + LEFT(NUMBR_3,3) + '-' + LEFT(NUMBR_4,5) AS SN
,DESCR
,TYP
,ATNUM
FROM DB2.dbo.table2
From there I'd manually complete the consolidation in Excel and feed that data into the necessary report. I was hoping to get the final result using just SQL, but doing so is outside of my skill set.
I wrapped the above query in another select to get distinct or group by SN, which gets me the final consolidated list of SNs. However, because those values themselves weren't something I could use to then query the other fields from the first table (at least not that I could figure out), I wasn't sure how to proceed. Any help would be appreciated. Thanks.
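One way to finish the approach described above is to rank the UNION ALL output per SN and keep the first table's row whenever both databases have one; a sketch using the column names from the question:
WITH combined AS (
    SELECT LEFT(NUMBR_1,4) + '-' + LEFT(NUMBR_2,4) + '-' + LEFT(NUMBR_3,3) + '-' + LEFT(NUMBR_4,5) AS SN,
           DESCR, TYP, ATNUM, 1 AS src
    FROM DB1.dbo.table1
    UNION ALL
    SELECT LEFT(NUMBR_1,4) + '-' + LEFT(NUMBR_2,4) + '-' + LEFT(NUMBR_3,3) + '-' + LEFT(NUMBR_4,5),
           DESCR, TYP, ATNUM, 2
    FROM DB2.dbo.table2
)
SELECT SN, DESCR, TYP, ATNUM
FROM (SELECT *, ROW_NUMBER() OVER (PARTITION BY SN ORDER BY src) AS rn
      FROM combined) x
WHERE rn = 1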
SELECT
    LEFT(coalesce(t1.NUMBR_1, t2.NUMBR_1),4) + '-' + LEFT(coalesce(t1.NUMBR_2, t2.NUMBR_2),4) + '-' + LEFT(coalesce(t1.NUMBR_3, t2.NUMBR_3),3) + '-' + LEFT(coalesce(t1.NUMBR_4, t2.NUMBR_4),5) AS SN,
    coalesce(t1.DESCR, t2.DESCR) DESCR,
    coalesce(t1.TYP, t2.TYP) TYP,
    coalesce(t1.ATNUM, t2.ATNUM) ATNUM
FROM DB1.dbo.table1 t1
FULL JOIN DB2.dbo.table2 t2 ON
    t1.NUMBR_1 = t2.NUMBR_1 AND t1.NUMBR_2 = t2.NUMBR_2 AND t1.NUMBR_3 = t2.NUMBR_3 AND t1.NUMBR_4 = t2.NUMBR_4
Similar answer to Joel's, who beat me to it, though this one will actually run. Just swap out @t1 and @t2 for your table names. FULL JOINs return all records from both tables; where there is no match, they return NULLs for one side and the unmatched values for the other:
declare @t1 table (numbr_1 int
                  ,numbr_2 int
                  ,numbr_3 int
                  ,numbr_4 int
                  ,descr nvarchar(50)
                  ,typ nvarchar(50)
                  ,atnum int
                  );
declare @t2 table (numbr_1 int
                  ,numbr_2 int
                  ,numbr_3 int
                  ,numbr_4 int
                  ,descr nvarchar(50)
                  ,typ nvarchar(50)
                  ,atnum int
                  );
insert into @t1 values
 (1,1,1,1,'d1','t1',1)
,(1,1,1,2,'d2','t1',1)
,(1,1,1,3,'d3','t2',2)
,(1,1,2,1,'d4','t2',3)
,(1,1,2,2,'d5','t2',4);
insert into @t2 values
 (1,1,1,1,'d6','t1',1)
,(1,1,1,2,'d7','t3',1)
,(1,2,1,3,'d8','t4',2)
,(1,2,2,1,'d9','t4',3)
,(1,2,2,2,'d5','t2',4);
select coalesce(left(t1.numbr_1,4) + '-' + left(t1.numbr_2,4) + '-' + left(t1.numbr_3,4) + '-' + left(t1.numbr_4,4)
               ,left(t2.numbr_1,4) + '-' + left(t2.numbr_2,4) + '-' + left(t2.numbr_3,4) + '-' + left(t2.numbr_4,4)
               ) as ID
      ,coalesce(t1.descr,t2.descr) as descr
      ,coalesce(t1.typ,t2.typ) as typ
      ,coalesce(t1.atnum,t2.atnum) as atnum
from @t1 t1
full join @t2 t2
    on(t1.numbr_1 = t2.numbr_1
   and t1.numbr_2 = t2.numbr_2
   and t1.numbr_3 = t2.numbr_3
   and t1.numbr_4 = t2.numbr_4
   );

SQL Server 2008 - For XML Path is very slow

I need help increasing the performance of my query; an example is below. I have a SELECT query listing multiple fields of the CUSTOMER table, and it is really fast, about 15 ms. However, when I include the statement below, which uses FOR XML PATH to grab the Customer POs (multiple) and combine them into one column, it is very slow, but it works.
Any suggestions on how to increase the performance while still getting the same results (combining the Customer POs into one column)? A sample code would be appreciated.
Select
Col1, Col2,
(SELECT
STUFF((SELECT ', ' + CustomerPO
FROM dbo.Tbl_CustomerPO
WHERE CustomerID = cus.CustomerID
FOR XML PATH('')), 1, 1, '')
) AS CustomerPOs
FROM Tbl_Customer cus
Thank you,
Try something like this ....
With a TINY recordset, the "string concatenating" function method appears better on execution plan cost, IO, and time.
IF OBJECT_ID(N'fnConcatenateCustPOs', N'FN') IS NOT NULL
BEGIN
    DROP FUNCTION dbo.fnConcatenateCustPOs
END
GO
CREATE FUNCTION dbo.fnConcatenateCustPOs
(
    @CustomerID INT
)
RETURNS nvarchar(max)
--WITH ENCRYPTION
AS
BEGIN
    DECLARE @StrFP nvarchar(3750)
    --DECLARE @Custpo TABLE(CustomerPOId INT, CustomerID INT)
    SET @StrFP = ''
    SELECT @StrFP = @StrFP + ',' + CAST(CustomerPOId AS nvarchar(50))
    FROM CustPO co
    WHERE co.CustomerID = @CustomerID
    RETURN SUBSTRING(@StrFP, 2, LEN(@StrFP))
END
GO
IF OBJECT_ID(N'Cust', N'U') IS NOT NULL
BEGIN
    DROP TABLE Cust
END
IF OBJECT_ID(N'CustPO', N'U') IS NOT NULL
BEGIN
    DROP TABLE CustPO
END
CREATE TABLE Cust (CustomerID INT)
CREATE TABLE CustPO (CustomerPOId INT, CustomerId INT)
INSERT Cust
SELECT 1
UNION
SELECT 2
INSERT CustPO
SELECT 10, 1
UNION
SELECT 20, 1
UNION
SELECT 30, 2
UNION
SELECT 31, 2
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT CustomerId, dbo.fnConcatenateCustPOs(CustomerID)
FROM Cust cus
Select
CustomerID,
(SELECT
STUFF((SELECT ', ' + CAST(CustomerPOId AS nvarchar(50))
FROM dbo.CustPO
WHERE CustomerID = cus.CustomerID
FOR XML PATH('')), 1, 2, '')
) AS CustomerPOs
FROM Cust cus
SET STATISTICS IO OFF
SET STATISTICS TIME OFF

Try:
Select
    cus.Col1, cus.Col2,
    STUFF((SELECT ', ' + cpo2.CustomerPO
           FROM dbo.Tbl_CustomerPO cpo2
           WHERE cpo2.CustomerID = cus.CustomerID
           FOR XML PATH('')), 1, 1, '') AS CustomerPOs
FROM Tbl_Customer cus
INNER JOIN dbo.Tbl_CustomerPO cpo ON cus.CustomerID = cpo.CustomerID
GROUP BY cus.Col1, cus.Col2, cus.CustomerID
You're introducing a JOIN to your query, which will inherently affect performance.
If you index the joining field CustomerID you can speed up this query. Not much else to do here.
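For example (the index name here is made up; adjust to your schema):
CREATE NONCLUSTERED INDEX IX_Tbl_CustomerPO_CustomerID
    ON dbo.Tbl_CustomerPO (CustomerID)
    INCLUDE (CustomerPO)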
Note: since you're prefixing your CustomerPO list with a comma and a space, you should use:
FOR XML PATH('')), 1, 2, '')
if you don't want your resulting strings to all start with a space.

replace value in varchar(max) field with join

I have a table that contains a text field with placeholders. Something like this:
Row Notes
1. This is some notes ##placeholder130## this ##myPlaceholder##, #oneMore#. End.
2. Second row...just a ##test#.
(This table contains about 1-5k rows on average. Average number of placeholders in one row is 5-15).
Now, I have a lookup table that looks like this:
Name Value
placeholder130 Dog
myPlaceholder Cat
oneMore Cow
test Horse
(Lookup table will contain anywhere from 10k to 100k records)
I need to find the fastest way to join those placeholders from strings to a lookup table and replace with value. So, my result should look like this (1st row):
This is some notes Dog this Cat, Cow. End.
What I came up with was to split each row into multiple rows, one per placeholder, join those to the lookup table, and then concatenate the records back into the original row with the new values, but it takes around 10-30 seconds on average.
You could try to split the string using a numbers table and rebuild it with for xml path.
select (
select coalesce(L.Value, T.Value)
from Numbers as N
cross apply (select substring(Notes.notes, N.Number, charindex('##', Notes.notes + '##', N.Number) - N.Number)) as T(Value)
left outer join Lookup as L
on L.Name = T.Value
where N.Number <= len(notes) and
substring('##' + notes, Number, 2) = '##'
order by N.Number
for xml path(''), type
).value('text()[1]', 'varchar(max)')
from Notes
SQL Fiddle
I borrowed the string splitting from this blog post by Aaron Bertrand
SQL Server is not very fast with string manipulation, so this is probably best done client-side. Have the client load the entire lookup table and replace the notes as they arrive.
Having said that, it can of course be done in SQL. Here's a solution with a recursive CTE. It performs one lookup per recursion step:
; with Repl as
(
select row_number() over (order by l.name) rn
, Name
, Value
from Lookup l
)
, Recurse as
(
select Notes
, 0 as rn
from Notes
union all
select replace(Notes, '##' + l.name + '##', l.value)
, r.rn + 1
from Recurse r
join Repl l
on l.rn = r.rn + 1
)
select *
from Recurse
where rn =
(
select count(*)
from Lookup
)
option (maxrecursion 0)
Example at SQL Fiddle.
Another option is a while loop to keep replacing lookups until no more are found:
declare @notes table (notes varchar(max))
insert @notes
select Notes
from Notes
while 1=1
begin
    update n
    set Notes = replace(n.Notes, '##' + l.name + '##', l.value)
    from @notes n
    outer apply
    (
        select top 1 Name
             , Value
        from Lookup l
        where n.Notes like '%##' + l.name + '##%'
    ) l
    where l.name is not null
    if @@rowcount = 0
        break
end
select *
from @notes
Example at SQL Fiddle.
I second the comment that T-SQL is just not suited for this operation, but if you must do it in the db, here is an example using a function to manage the multiple replace statements.
Since you have a relatively small number of tokens in each note (5-15) and a very large number of tokens overall (10k-100k), my function first extracts the potential tokens from the input and uses that set to join to your lookup (dbo.[Lookup] below). It was far too much work to look for an occurrence of every one of your tokens in each note.
I did a bit of perf testing using 50k tokens and 5k notes and this function runs really well, completing in <2 seconds (on my laptop). Please report back how this strategy performs for you.
Note: in your example data the token format was not consistent (##_#, ##_##, #_#); I am guessing this was simply a typo and assume all tokens take the form ##TokenName##.
--setup
if object_id('dbo.[Lookup]') is not null
    drop table dbo.[Lookup];
go
if object_id('dbo.fn_ReplaceLookups') is not null
    drop function dbo.fn_ReplaceLookups;
go
create table dbo.[Lookup] (LookupName varchar(100) primary key, LookupValue varchar(100));
insert into dbo.[Lookup]
select '##placeholder130##','Dog' union all
select '##myPlaceholder##','Cat' union all
select '##oneMore##','Cow' union all
select '##test##','Horse';
go
create function [dbo].[fn_ReplaceLookups](@input varchar(max))
returns varchar(max)
as
begin
    declare @xml xml;
    select @xml = cast(('<r><i>'+replace(@input,'##' ,'</i><i>')+'</i></r>') as xml);
    --extract the potential tokens
    declare @LookupsInString table (LookupName varchar(100) primary key);
    insert into @LookupsInString
    select distinct '##'+v+'##'
    from ( select [v] = r.n.value('(./text())[1]', 'varchar(100)'),
                  [r] = row_number() over (order by (select null)) -- relies on nodes() returning document order
           from @xml.nodes('r/i') r(n)
         )d(v,r)
    where r%2=0;
    --tokenize the input
    select @input = replace(@input, l.LookupName, l.LookupValue)
    from dbo.[Lookup] l
    join @LookupsInString lis on
        l.LookupName = lis.LookupName;
    return @input;
end
go
--usage
declare @Notes table ([Id] int primary key, notes varchar(100));
insert into @Notes
select 1, 'This is some notes ##placeholder130## this ##myPlaceholder##, ##oneMore##. End.' union all
select 2, 'Second row...just a ##test##.';
select *,
       dbo.fn_ReplaceLookups(notes)
from @Notes;
Returns:
Tokenized
--------------------------------------------------------
This is some notes Dog this Cat, Cow. End.
Second row...just a Horse.
Try this
;WITH CTE (org, calc, [Notes], [level]) AS
(
SELECT [Notes], [Notes], CONVERT(varchar(MAX),[Notes]), 0 FROM PlaceholderTable
UNION ALL
SELECT CTE.org, CTE.[Notes],
CONVERT(varchar(MAX), REPLACE(CTE.[Notes],'##' + T.[Name] + '##', T.[Value])), CTE.[level] + 1
FROM CTE
INNER JOIN LookupTable T ON CTE.[Notes] LIKE '%##' + T.[Name] + '##%'
)
SELECT DISTINCT org, [Notes], level FROM CTE
WHERE [level] = (SELECT MAX(level) FROM CTE c WHERE CTE.org = c.org)
SQL FIDDLE DEMO
Check the devioblog post below for reference:
devioblog post
To get speed, you can preprocess the note templates into a more efficient form. This will be a sequence of fragments, with each ending in a substitution. The substitution might be NULL for the last fragment.
Notes
Id FragSeq Text SubsId
1 1 'This is some notes ' 1
1 2 ' this ' 2
1 3 ', ' 3
1 4 '. End.' null
2 1 'Second row...just a ' 4
2 2 '.' null
Subs
Id Name Value
1 'placeholder130' 'Dog'
2 'myPlaceholder' 'Cat'
3 'oneMore' 'Cow'
4 'test' 'Horse'
Now we can do the substitutions with a simple join.
SELECT Notes.Text + COALESCE(Subs.Value, '')
FROM Notes LEFT JOIN Subs ON Notes.SubsId = Subs.Id
WHERE Notes.Id = ?
ORDER BY Notes.FragSeq
This produces a list of fragments with substitutions complete. I am not an MS SQL user, but in most dialects of SQL you can concatenate these fragments in a variable quite easily:
DECLARE @Note VARCHAR(8000)
SELECT @Note = COALESCE(@Note, '') + Notes.Text + COALESCE(Subs.Value, '')
FROM Notes LEFT JOIN Subs ON Notes.SubsId = Subs.Id
WHERE Notes.Id = ?
ORDER BY Notes.FragSeq
Pre-processing a note template into fragments will be straightforward using the string splitting techniques of other posts.
Unfortunately I'm not at a location where I can test this, but it ought to work fine.
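A possible sketch of that preprocessing step, assuming the Notes/Subs layout above and ##token## delimiters (it emits one result set per fragment just to keep the example short):
DECLARE @raw varchar(max) = 'This is some notes ##placeholder130## this ##myPlaceholder##, ##oneMore##. End.'
DECLARE @seq int = 1, @open int, @close int
WHILE 1 = 1
BEGIN
    SET @open = CHARINDEX('##', @raw)
    IF @open = 0 BREAK
    SET @close = CHARINDEX('##', @raw, @open + 2)
    IF @close = 0 BREAK
    -- fragment text plus the name of the substitution that follows it
    SELECT @seq AS FragSeq,
           LEFT(@raw, @open - 1) AS [Text],
           SUBSTRING(@raw, @open + 2, @close - @open - 2) AS SubsName
    SET @raw = STUFF(@raw, 1, @close + 1, '')
    SET @seq = @seq + 1
END
-- trailing fragment with no substitution
SELECT @seq AS FragSeq, @raw AS [Text], NULL AS SubsName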
I really don't know how it will perform with 10k+ of lookups.
How does the old dynamic SQL perform?
DECLARE @sqlCommand NVARCHAR(MAX)
SELECT @sqlCommand = N'PlaceholderTable.[Notes]'
SELECT @sqlCommand = 'REPLACE( ' + @sqlCommand +
                     ', ''##' + LookupTable.[Name] + '##'', ''' +
                     LookupTable.[Value] + ''')'
FROM LookupTable
SELECT @sqlCommand = 'SELECT *, ' + @sqlCommand + ' FROM PlaceholderTable'
EXECUTE sp_executesql @sqlCommand
Fiddle demo
And now for some recursive CTE.
If your indexes are correctly set up, this one should be very fast or very slow. SQL Server always surprises me with performance extremes when it comes to the r-CTE...
;WITH T AS (
SELECT
Row,
StartIdx = 1, -- 1 as first starting index
EndIdx = CAST(patindex('%##%', Notes) as int), -- first ending index
Result = substring(Notes, 1, patindex('%##%', Notes) - 1)
-- (first) temp result bounded by indexes
FROM PlaceholderTable -- **this is your source table**
UNION ALL
SELECT
pt.Row,
StartIdx = newstartidx, -- starting index (calculated in calc1)
EndIdx = EndIdx + CAST(newendidx as int) + 1, -- ending index (calculated in calc4 + total offset)
Result = Result + CAST(ISNULL(newtokensub, newtoken) as nvarchar(max))
-- temp result taken from subquery or original
FROM
T
JOIN PlaceholderTable pt -- **this is your source table**
ON pt.Row = T.Row
CROSS APPLY(
SELECT newstartidx = EndIdx + 2 -- new starting index moved by 2 from last end ('##')
) calc1
CROSS APPLY(
SELECT newtxt = substring(pt.Notes, newstartidx, len(pt.Notes))
-- current piece of txt we work on
) calc2
CROSS APPLY(
SELECT patidx = patindex('%##%', newtxt) -- current index of '##'
) calc3
CROSS APPLY(
SELECT newendidx = CASE
WHEN patidx = 0 THEN len(newtxt) + 1
ELSE patidx END -- if last piece of txt, end with its length
) calc4
CROSS APPLY(
SELECT newtoken = substring(pt.Notes, newstartidx, newendidx - 1)
-- get the new token
) calc5
OUTER APPLY(
SELECT newtokensub = Value
FROM LookupTable
WHERE Name = newtoken -- substitute the token if you can find it in **your lookup table**
) calc6
WHERE newstartidx + len(newtxt) - 1 <= len(pt.Notes)
-- recurse while {new starting index} + {length of txt we work on} does not exceed the total length
)
,lastProcessed AS (
SELECT
Row,
Result,
rn = row_number() over(partition by Row order by StartIdx desc)
FROM T
) -- enumerate all (including intermediate) results
SELECT *
FROM lastProcessed
WHERE rn = 1 -- filter out intermediate results (display only last ones)

Parse SQL field into multiple rows

How can I take a SQL table that looks like this:
MemberNumber JoinDate Associate
1234 1/1/2011 A1 free A2 upgrade A31
5678 3/15/2011 A4
9012 5/10/2011 free
And output (using a view or writing to another table or whatever is easiest) this:
MemberNumber Date
1234-P 1/1/2011
1234-A1 1/1/2011
1234-A2 1/1/2011
1234-A31 1/1/2011
5678-P 3/15/2011
5678-A4 3/15/2011
9012-P 5/10/2011
Where each row results in a "-P" (primary) output line as well as any A# (associate) lines. The Associate field can contain a number of different non-"A#" values, but the "A#"s are all I'm interested in (# is from 1 to 99). There can be many "A#"s in that one field too.
Of course a table redesign would greatly simplify this query, but sometimes we just need to get it done. I wrote the below query using CTEs; I find it easier to follow and see exactly what's going on, but you could simplify this further once you grasp the technique.
To inject your "P" primary row, you will see that I simply jammed it into the Associate column, but it might be better placed in a simple UNION outside the CTEs.
In addition, if you do choose to refactor your schema the below technique can be used to "split" your Associate column into rows.
;with
Split (MemberNumber, JoinDate, AssociateItem)
as ( select MemberNumber, JoinDate, p.n.value('(./text())[1]','varchar(25)')
     from ( select MemberNumber, JoinDate, n=cast('<n>'+replace(Associate + ' P',' ','</n><n>')+'</n>' as xml).query('.')
            from @t
          ) a
     cross apply n.nodes('n') p(n)
   )
select MemberNumber + '-' + AssociateItem,
       JoinDate
from Split
where left(AssociateItem, 1) in ('A','P')
order by MemberNumber;
The XML method is not a great option performance-wise, as its speed degrades as the number of items in the "array" increases. If you have long arrays, the following approach might be of use to you:
--* should be physical table, but use this cte if needed
--;with
--number (n)
--as ( select top(50) row_number() over(order by number) as n
-- from master..spt_values
-- )
select MemberNumber + '-' + substring(Associate, n, isnull(nullif(charindex(' ', Associate + ' P', n)-1, -1), len(Associate)) - n+1),
JoinDate
from ( select MemberNumber, JoinDate, Associate + ' P' from @t
) t (MemberNumber, JoinDate, Associate)
cross
apply number n
where n <= convert(int, len(Associate)) and
substring(' ' + Associate, n, 1) = ' ' and
left(substring(Associate, n, isnull(nullif(charindex(' ', Associate, n)-1, -1), len(Associate)) - n+1), 1) in ('A', 'P');
Try this new version
declare @t table (MemberNumber varchar(8), JoinDate date, Associate varchar(50))
insert into @t values ('1234', '1/1/2011', 'A1 free A2 upgrade A31'),('5678', '3/15/2011', 'A4'),('9012', '5/10/2011', 'free')
;with b(f, t, membernumber, joindate, associate)
as
(
select 1, 0, membernumber, joindate, Associate
from @t
union all
select t+1, charindex(' ',Associate + ' ', t+1), membernumber, joindate, Associate
from b
where t < len(Associate)
)
select MemberNumber + case when t = 0 then '-P' else '-'+substring(Associate, f,t-f) end NewMemberNumber, JoinDate
from b
where t = 0 or substring(Associate, f,1) = 'A'
--where t = 0 or substring(Associate, f,2) like 'A[1-9]'
-- order by MemberNumber, t
Result is the same as the requested output.
I would recommend altering your database structure by adding a link table instead of the "Associate" column. A link table would consist of two or more columns like this:
MemberNumber Associate Details
-----------------------------------
1234 A1 free
1234 A2 upgrade
1234 A31
5678 A4
Then the desired result can be obtained with a simple JOIN:
SELECT CONCAT(m.`MemberNumber`, '-', 'P'), m.`JoinDate`
FROM `members` m
UNION
SELECT CONCAT(m.`MemberNumber`, '-', IFNULL(a.`Associate`, 'P')), m.`JoinDate`
FROM `members` m
RIGHT JOIN `members_associates` a ON m.`MemberNumber` = a.`MemberNumber`
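The answer above is written in MySQL syntax; if you are on SQL Server like the rest of this thread, the same idea in T-SQL (members and members_associates being the hypothetical tables above) would be roughly:
SELECT m.MemberNumber + '-P' AS MemberNumber, m.JoinDate AS [Date]
FROM members m
UNION ALL
SELECT m.MemberNumber + '-' + a.Associate, m.JoinDate
FROM members m
INNER JOIN members_associates a ON m.MemberNumber = a.MemberNumber
ORDER BY MemberNumber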