Ignoring specific letters to find a match in a SQL query

I want to query a table for all the values that are in a list in another table to find matches, but I know that some of the values in either table may be typed incorrectly. One table may have '10Hf7K8' and another table may have '1OHf7K8', but I still want them to match.
Another example: if one table has 'STOP', I know that in myTable some fields may say '5T0P' or 'ST0P' or '5TOP'. I want those to come up as results too. The same thing may occur for '2' and 'Z': I want 'ZEPT' and '2EPT' to match.
So if I know to account for inconsistencies between '0' and 'O', '5' and 'S', and 'Z' and '2', and I know they will be in the same spot but not where exactly in the word they will be or how many letters the word will have, is it possible to write a query that ignores those letters?
Additional information: these values are hundreds of serial keys, and I have no way of confirming which of the two tables holds the correct version. I should not have used actual words in my example; these values can be any combination of letters and numbers in any order. There is no distinct pattern that I can hard-code.
SOLUTION: Goat CO, Learning, and user3216429's answers contained the solution I needed. I was able to find matching values while keeping the underlying data intact.

Cleaning the data is preferable, but you could use nested REPLACE() calls if you can't alter the underlying data:
SELECT *
FROM Table1 a
JOIN Table2 b
ON REPLACE(REPLACE(REPLACE(a.field1,'2','Z'),'5','S'),'0','O') = REPLACE(REPLACE(REPLACE(b.field1,'2','Z'),'5','S'),'0','O')
Cleansing the data could use the same nested REPLACE expression:
ALTER TABLE Table1 ADD cleanfield VARCHAR(25)
GO
UPDATE Table1
SET cleanfield = REPLACE(REPLACE(REPLACE(dirtyfield,'2','Z'),'5','S'),'0','O')
Then you'd be able to join the tables on the clean field.
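For example, once both tables have been given the clean column (adding cleanfield to Table2 in the same way is assumed here), the join could simply be:
SELECT a.*, b.*
FROM Table1 a
JOIN Table2 b
ON a.cleanfield = b.cleanfield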

What you can and should do is clean your data: replace all those 2, 0, and 5 characters with Z, O, and S.
But if you want to try some other solution, you can try something like this:
select case
    when REPLACE(REPLACE(REPLACE('stop','0','o'),'5','s'),'2','Z') = REPLACE(REPLACE(REPLACE('5t0p','0','o'),'5','s'),'2','Z')
    then 1 else 2
end

As previously said, if you have time, clean up the data.
If not, SQL Server supplies two string functions that might help.
The example is from my blog article. http://craftydba.com/?p=5211
The SOUNDEX() function turns a word into a 4 character value. The DIFFERENCE() function tells you how close two words are.
Your example seems to be one word. You might want to use a calculated column and index it so that the where clause is SARGABLE; see the sketch after the examples below.
If you are using paragraphs, use a standard split function to turn your text paragraph into words. Use these functions to search the data. However, this will result in a non-SARGABLE expression.
-- Example returns 4, words are very close
select
soundex('Dog') as word_val1,
soundex('Dogs') as word_val2,
difference('Dog', 'Dogs') as how_close
-- Example returns 0, words are very different
select
soundex('Rattle-Snake') as word_val1,
soundex('Mongoose') as word_val2,
difference('Rattle-Snake', 'Mongoose') as how_close
output:
word_val1 word_val2 how_close
--------- --------- -----------
D200 D200 4
word_val1 word_val2 how_close
--------- --------- -----------
R340 M522 0
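As a rough sketch of the calculated-column idea mentioned above (the table names and the serial_value column are assumptions, not from the original post), you could persist and index the SOUNDEX() value and join on it:
-- Persist the SOUNDEX() of the value and index it (sketch only)
ALTER TABLE Table1 ADD serial_soundex AS SOUNDEX(serial_value) PERSISTED
GO
CREATE INDEX IX_Table1_serial_soundex ON Table1 (serial_soundex)
GO
-- The join on the indexed computed column is SARGable on the Table1 side
SELECT a.*, b.*
FROM Table1 a
JOIN Table2 b
ON a.serial_soundex = SOUNDEX(b.serial_value)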
Last but not least, you can also look into full-text indexing (FTI) for speed. This requires some extra overhead (the FTI structure and a process to keep it updated).
http://craftydba.com/?p=1421

select REPLACE(REPLACE( REPLACE([column_name],'O','0'),'Z','2'),'5','S')
from [table_name]

1) To filter out all rows containing any form of the word STOP (STOP, 5TOP, ST0P, 5T0P), you could use the following query based on LIKE:
SELECT *
FROM (
SELECT 1, 'CocoJambo' UNION ALL
SELECT 2, '5T0P' UNION ALL
SELECT 3, ' 5TOP ' UNION ALL
SELECT 4, ' ST0P ' UNION ALL
SELECT 5, ' STOP ' UNION ALL
SELECT 6, 'ZTOP'
) x (ID, ColA)
WHERE x.ColA LIKE '%[5S]T[0O]P%';
Output:
ID ColA
----------- ---------
2 5T0P
3 5TOP
4 ST0P
5 STOP
2) Regarding your question:
For every table, I would first build a table with all the patterns for every word, storing the proper/accurate word for each pattern, and then replace every occurrence of a pattern with the proper word.
After this preprocessing of the two tables, I would try to match them.
This script replaces only the first occurrence of a pattern.
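It relies on a #Words pattern table that the answer does not define; here is a minimal sketch of what it might contain, with column names taken from the query and sample rows inferred from the output below:
CREATE TABLE #Words (WordPattern NVARCHAR(50), Word NVARCHAR(50))
INSERT #Words (WordPattern, Word) VALUES (N'%[5S]T[0O]P%', N'STOP')
INSERT #Words (WordPattern, Word) VALUES (N'%b[o0]%', N'bob')
With that in place, the script is: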
SELECT x.*, oa.*,
CASE
WHEN oa.PatIx > 0 THEN STUFF( x.ColA , oa.PatIx , LEN(oa.Word), oa.Word )
ELSE x.ColA
END AS NewColA
FROM (
SELECT 1, 'CocoJambo' UNION ALL
SELECT 2, '5T0P' UNION ALL
SELECT 3, ' 5TOP ' UNION ALL
SELECT 4, ' ST0P ' UNION ALL
SELECT 5, ' STOP jambo jumbo 5TOP bOb ' UNION ALL
SELECT 6, 'ZTOP'
) x (ID, ColA)
OUTER APPLY (
SELECT *
FROM (
SELECT w.WordPattern, w.Word, PATINDEX( w.WordPattern , x.ColA ) AS PatIx
FROM #Words w
) y
WHERE y.PatIx > 0
) oa
Output:
ID ColA WordPattern Word PatIx NewColA
-- ----------------------------- ------------ ---- ----- ----------------------------
1 CocoJambo %b[o0]% bob 8 CocoJambob
2 5T0P %[5S]T[0O]P% STOP 1 STOP
3 5TOP %[5S]T[0O]P% STOP 2 STOP
4 ST0P %[5S]T[0O]P% STOP 3 STOP
5 STOP jambo jumbo 5TOP bOb %[5S]T[0O]P% STOP 4 STOP jambo jumbo 5TOP bOb
5 STOP jambo jumbo 5TOP bOb %b[o0]% bob 12 STOP jambobjumbo 5TOP bOb
6 ZTOP NULL NULL NULL ZTOP
Note: this solution is just a proof of concept; it needs further development.
Or you could try this solution which replaces all wrong words with the proper form:
CREATE TABLE dbo.Words ( Id INT IDENTITY PRIMARY KEY, WordSource NVARCHAR(50) NOT NULL, Word NVARCHAR(50) NOT NULL );
INSERT dbo.Words ( WordSource , Word ) VALUES ( N'5T0P' , N'STOP' );
INSERT dbo.Words ( WordSource , Word ) VALUES ( N'5TOP' , N'STOP' );
INSERT dbo.Words ( WordSource , Word ) VALUES ( N'ST0P' , N'STOP' );
INSERT dbo.Words ( WordSource , Word ) VALUES ( N'b0b' , N'bob' );
INSERT dbo.Words ( WordSource , Word ) VALUES ( N'bOb' , N'bob' );
GO
CREATE FUNCTION dbo.ReplaceWords (@ColA NVARCHAR(4000), @Num INT)
RETURNS TABLE
AS
RETURN
WITH CteRecursive
AS
(
SELECT w.Id, w.WordSource, w.Word, REPLACE(@ColA, w.WordSource, w.Word) AS NewColA
FROM dbo.Words w
WHERE w.Id = 1
UNION ALL
SELECT w.Id, w.WordSource, w.Word, REPLACE(prev.NewColA, w.WordSource, w.Word) AS NewColA
FROM CteRecursive prev INNER JOIN dbo.Words w ON prev.Id + 1 = w.Id
WHERE prev.Id + 1 <= @Num
)
SELECT r.NewColA
FROM CteRecursive r
WHERE r.Id = @Num
GO
-- Testing
SELECT * FROM dbo.ReplaceWords(N' ST0P jambo 5TOP bOb jumbo ', 5) f;
Output
NewColA
----------------------------
STOP jambo STOP bob jumbo
You can use the previous function to replace all the wrong words within every table, and then you can compare both tables:
DECLARE @Num INT;
SET @Num = (SELECT COUNT(*) FROM dbo.Words);
SELECT x.*, rpl.NewColA
FROM (
SELECT 1, N'CocoJambo' UNION ALL
SELECT 2, N'5T0P' UNION ALL
SELECT 3, N' 5TOP ' UNION ALL
SELECT 4, N' ST0P ' UNION ALL
SELECT 5, N' STOP jambo jumbo 5TOP bOb ' UNION ALL
SELECT 6, N'ZTOP' UNION ALL
SELECT 7, N'' UNION ALL
SELECT 8, NULL
) x (ID, ColA)
OUTER APPLY dbo.ReplaceWords(x.ColA, @Num) rpl
Output:
ID ColA NewColA
-- ----------------------------- ----------------------------
1 CocoJambo CocoJambo
2 5T0P STOP
3 5TOP STOP
4 ST0P STOP
5 STOP jambo jumbo 5TOP bOb STOP jambo jumbo STOP bob
6 ZTOP ZTOP
7
8 NULL NULL
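The final matching step between the two real tables isn't shown in the answer; a minimal sketch of how it might look, assuming both tables expose a ColA column (table and column names are assumptions) and reusing dbo.ReplaceWords:
DECLARE @Num INT;
SET @Num = (SELECT COUNT(*) FROM dbo.Words);
SELECT a.ColA AS ColA_1, b.ColA AS ColA_2
FROM (
    -- normalize Table1
    SELECT t.ColA, r.NewColA
    FROM Table1 t
    CROSS APPLY dbo.ReplaceWords(t.ColA, @Num) r
) a
JOIN (
    -- normalize Table2 the same way
    SELECT t.ColA, r.NewColA
    FROM Table2 t
    CROSS APPLY dbo.ReplaceWords(t.ColA, @Num) r
) b
ON a.NewColA = b.NewColA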

Related

Oracle SQL: Merging multiple columns into 1 with conditions

I am new to SQL and don't really have a lot of experience. I need help with this: I have Table A and I want to write a SQL query to generate the result table below. Any help would be greatly appreciated! Thanks!
Table A
Name     Capacity A  Capacity B  Capacity C
Plant 1  10                      20
Plant 2              10

Result Table
Name     Type  Capacity
Plant 1  A,C   10,20
Plant 2  B     10
I know the LISTAGG function might be able to combine a few columns into one, but is there any way for me to generate the additional column 'Type' so that it is smart enough to know which column I am taking my value from? Preferably without creating any additional views/tables.
Use NVL2 (or CASE) to concatenate the columns, then trim any excess trailing commas:
SELECT Name,
RTRIM(
NVL2(CapacityA,'A,',NULL)
||NVL2(CapacityB,'B,',NULL)
||NVL2(CapacityC,'C',NULL),
','
) AS type,
RTRIM(
NVL2(CapacityA,CapacityA||',',NULL)
||NVL2(CapacityB,CapacityB||',',NULL)
||NVL2(CapacityC,CapacityC,NULL),
','
) AS capacity
FROM table_name;
Which, for the sample data:
CREATE TABLE table_name (name, capacitya, capacityb, capacityc) AS
SELECT 'Plant1', 10, NULL, 20 FROM DUAL UNION ALL
SELECT 'Plant2', NULL, 10, NULL FROM DUAL;
Outputs:
NAME    TYPE  CAPACITY
------  ----  --------
Plant1  A,C   10,20
Plant2  B     10
Here's one option:
- sample data is in lines #1 - 4
- the temp CTE simply (conditionally) concatenates types and capacities
- the final query (line #17) removes doubled separators (commas) with regexp_replace and superfluous leading/trailing commas with trim
SQL> with test (name, capa, capb, capc) as
2 (select 'Plant1', 10, null, 20 from dual union all
3 select 'Plant2', null, 10, null from dual
4 ),
5 temp as
6 (select name,
7 --
8 case when capa is not null then 'A' end ||','||
9 case when capb is not null then 'B' end ||','||
10 case when capc is not null then 'C' end as type,
11 --
12 case when capa is not null then capa end ||','||
13 case when capb is not null then capb end ||','||
14 case when capc is not null then capc end as capacity
15 from test
16 )
17 select name,
18 trim(both ',' from regexp_replace(type , ',+', ',')) as type,
19 trim(both ',' from regexp_replace(capacity, ',+', ',')) as capacity
20 from temp;
NAME TYPE CAPACITY
------ ---------- ----------
Plant1 A,C 10,20
Plant2 B 10
SQL>

How to union a hardcoded row after each grouped result

After every group / row I want to insert a hardcoded dummy row with a bunch of 'xxxx' values to act as a separator.
I would like to do this in Oracle SQL. I could do it with a loop, but I don't want to use PL/SQL.
As the others suggest, it is best to do this on the front end.
However, if you have a burning need to do it as a query, here is how.
Here I did not use the ROWNUM pseudocolumn, since your data already contains row numbers. I assume your data is returned by a query, and you can replace my table with your query; I made a few more assumptions along those lines.
[I am not sure what you mean by "not PL/SQL".]
Select Case When MOD(rownm, 2) = 0 then ' '
Else to_char((rownm + 1) / 2) End as rownm,
name, total, column1
From
(
select (rownm * 2 - 1) rownm,name, to_char(total) total ,column1 from t
union
SELECT (rownm * 2) rownm,'XXX' name, 'XXX' total, 'The row act .... ' column1 FROM t
) Q
Order by Q.rownm;
Since you're already grouping the data, it might be easier to use GROUPING SETS instead of a UNION.
Grouping sets let you group by multiple sets of columns, including the same set twice to duplicate rows. Then the GROUP_ID function can be used to determine when the fake values should be used. This code will be a bit smaller than a UNION approach, and should be faster since it doesn't need to reference the table multiple times.
select
case when group_id() = 0 then name else '' end name,
case when group_id() = 0 then sum(some_value) else null end total,
case when group_id() = 1 then 'this rows...' else '' end column1
from
(
select 'jack' name, 22 some_value from dual union all
select 'jack' name, 1 some_value from dual union all
select 'john' name, 44 some_value from dual union all
select 'john' name, 1 some_value from dual union all
select 'harry' name, 1 some_value from dual union all
select 'harry' name, 1 some_value from dual
) raw_data
group by grouping sets (name, name)
order by raw_data.name, group_id();
You can use a row generator technique (using CONNECT BY) and then use CASE..WHEN as follows:
SQL> SELECT CASE WHEN L.LVL = 1 THEN T.ROWNM END AS ROWNM,
2 CASE WHEN L.LVL = 1 THEN T.NAME
3 ELSE 'XXX' END AS NAME,
4 CASE WHEN L.LVL = 1 THEN TO_CHAR(T.TOTAL)
5 ELSE 'XXX' END AS TOTAL,
6 CASE WHEN L.LVL = 1 THEN T.COLUMN1
7 ELSE 'This row act as separator..' END AS COLUMN1
8 FROM T CROSS JOIN (
9 SELECT LEVEL AS LVL FROM DUAL CONNECT BY LEVEL <= 2
10 ) L ORDER BY T.ROWNM, L.LVL;
ROWNM NAME TOTAL COLUMN1
---------- ---------- ----- ---------------------------
1 Jack 23
XXX XXX This row act as separator..
2 John 45
XXX XXX This row act as separator..
3 harry 2
XXX XXX This row act as separator..
4 roy 45
XXX XXX This row act as separator..
5 Jacob 26
XXX XXX This row act as separator..
10 rows selected.
SQL>

Delimited Function in SQL to Split Data between semi-colon

I have the data below.
I'm only interested in program B. How do I change it into the table below using SQL syntax?
Below is my attempt, but it doesn't give me what I want.
SELECT
SUBSTRING(Program, 0, CHARINDEX(';', Program)),
SUBSTRING(
SUBSTRING(Program, CHARINDEX(';', Program) + 1, LEN(Program)),
0,
CHARINDEX(';', SUBSTRING(Program, CHARINDEX(';', Program) + 1,
LEN(Program)))),
REVERSE(SUBSTRING(REVERSE(Program), 0, CHARINDEX(';', REVERSE(Program)))),
File_Count
FROM DataBase1
WHERE Program LIKE '%B%'
Thanks guys for your help.
Adhi
Try this:
SELECT
CASE WHEN PATINDEX('%B[0-9][0-9]%', Program)>0 THEN SUBSTRING(Program, PATINDEX('%B[0-9][0-9]%', Program), 3)
WHEN PATINDEX('%B[0-9]%', Program)>0 THEN SUBSTRING(Program, PATINDEX('%B[0-9]%', Program), 2)
ELSE '' END
FROM DataBase1
The first WHEN is responsible for extracting the pattern B[0-9][0-9], i.e. when B is followed by two digits; the second one is for extracting B followed by one digit. The default is to return an empty string when no match is found. If you are interested in extracting the pattern B followed by three digits, you need to add another WHEN (as the first case), use the pattern B[0-9][0-9][0-9] instead of B[0-9][0-9], and increase the last number (the length of the string that is extracted) by one.
PATINDEX returns the position where the match is found.
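For instance, a quick check of what PATINDEX returns (the input string here is just an illustration):
-- 'B' followed by a digit first appears at position 5
SELECT PATINDEX('%B[0-9]%', 'A1, B1')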
If you use PostgreSQL, you can try the next solution.
First create a temp table with data:
CREATE TABLE temp.test AS (
SELECT 'A1, B1' AS program, 1 AS file_count
UNION
SELECT 'B2', 1
UNION
SELECT 'A2, B3', 1
UNION
SELECT 'B4', 1
UNION
SELECT 'A3, B5', 2
UNION
SELECT 'B6', 2
UNION
SELECT 'B7', 2
UNION
SELECT 'B8', 1
UNION
SELECT 'B9', 1
UNION
SELECT 'C1;D1;A4;B10', 1
UNION
SELECT 'C2;D2;B11', 1
UNION
SELECT 'C3,D3,A5,B12', 1
UNION
SELECT 'C4;B14;D4;B11,B13', 1
);
I assumed that one program cell can contain several B values (see the last SELECT).
After that, use regexp_matches to find all B values in each cell together with its file_count value (the inner SELECT), and then sum by program:
SELECT
b_program,
sum(file_count)
FROM (
SELECT
(regexp_matches(program, 'B\d+', 'g'))[1] AS b_program,
file_count
FROM temp.test
WHERE upper(program) LIKE '%B%') bpt
GROUP BY b_program
ORDER BY b_program;

Check palindrome without using string functions with condition

I have a table EmployeeTable.
I want only the records where a substring of EmployeeName is a palindrome, for example characters 1 to 5.
Which substring is checked depends on the length: if the name has more than 10 characters, check characters 4 to 8; if it has fewer than 7, check characters 2 to 5; and if it has fewer than 5, check all of its characters. Only the rows where that substring is a palindrome should be displayed.
Examples: neen will be displayed
neetan will not be selected
kiratitamara will be selected
I tried something with string functions, for example for the first case (a name less than 5 characters long):
SELECT SUBSTRING(EmployeeName,1,5), * from EmployeeTable where
REVERSE(SUBSTRING(EmployeeName,1,5)) = SUBSTRING(EmployeeName,1,5)
I want to do that without string functions.
Can anyone help me with this?
You need at least SUBSTRING(). I have a solution like this (in SQL Server):
DECLARE @txt varchar(max) = 'abcba'
;WITH CTE (cNo, cChar) AS (
SELECT 1, SUBSTRING(@txt, 1, 1)
UNION ALL
SELECT cNo + 1, SUBSTRING(@txt, cNo + 1, 1)
FROM CTE
WHERE SUBSTRING(@txt, cNo + 1, 1) <> ''
)
SELECT COUNT(*)
FROM (
SELECT *, ROW_NUMBER() OVER (ORDER BY cNo DESC) as cRevNo
FROM CTE t1 CROSS JOIN
(SELECT Max(cNo) AS strLength FROM CTE) t2) dt
WHERE
dt.cNo <= dt.strLength / 2
AND
dt.cChar <> (SELECT dti.cChar FROM CTE dti WHERE dti.cNo = cRevNo)
The result shows the count of differences; 0 means no differences.
Note:
The current solution is case-insensitive; to make it case-sensitive you need to compare the strings with a case-sensitive collation like Latin1_General_BIN.
You can use this solution as a scalar-valued function (SVF) or something like that; see the sketch below.
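A minimal sketch of wrapping it in a scalar-valued function (the function name is assumed; note that the default recursion limit of 100 caps the string length that can be checked, since OPTION (MAXRECURSION) is not allowed inside a function):
CREATE FUNCTION dbo.PalindromeDiffCount (@txt varchar(max))
RETURNS int
AS
BEGIN
    DECLARE @diffCount int;
    WITH CTE (cNo, cChar) AS (
        SELECT 1, SUBSTRING(@txt, 1, 1)
        UNION ALL
        SELECT cNo + 1, SUBSTRING(@txt, cNo + 1, 1)
        FROM CTE
        WHERE SUBSTRING(@txt, cNo + 1, 1) <> ''
    )
    SELECT @diffCount = COUNT(*)
    FROM (
        SELECT *, ROW_NUMBER() OVER (ORDER BY cNo DESC) AS cRevNo
        FROM CTE t1 CROSS JOIN
             (SELECT MAX(cNo) AS strLength FROM CTE) t2
    ) dt
    WHERE dt.cNo <= dt.strLength / 2
      AND dt.cChar <> (SELECT dti.cChar FROM CTE dti WHERE dti.cNo = dt.cRevNo);
    -- 0 means no differences, i.e. the (case-insensitive) string is a palindrome
    RETURN @diffCount;
END
GO
-- Testing: returns 0
SELECT dbo.PalindromeDiffCount('abcba')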
I don't really understand why you don't want to use string functions in your query, but here is one solution: compute everything beforehand.
Add Column:
ALTER TABLE EmployeeTable
ADD SubString AS
SUBSTRING(EmployeeName,
(
CASE WHEN LEN(EmployeeName)>10
THEN 4
WHEN LEN(EmployeeName)>7
THEN 2
ELSE 1 END
)
,
(
CASE WHEN LEN(EmployeeName)>10
THEN 8
WHEN LEN(EmployeeName)>7
THEN 5
ELSE 5 END
)
) PERSISTED
GO
ALTER TABLE EmployeeTable
ADD Palindrome AS
REVERSE(SUBSTRING(EmployeeName,
(
CASE WHEN LEN(EmployeeName)>10
THEN 4
WHEN LEN(EmployeeName)>7
THEN 2
ELSE 1 END
)
,
(
CASE WHEN LEN(EmployeeName)>10
THEN 8
WHEN LEN(EmployeeName)>7
THEN 5
ELSE 5 END
))) PERSISTED
GO
Then your query will look like:
SELECT * from EmployeeTable
where Palindrome = SubString
BUT!
This is not a good idea. Please tell us why you don't want to use string functions.
You could do it by building a list of palindrome words, using a recursive query that generates palindrome words up to a length of n characters and then selecting the employees whose name matches a palindrome word. This may be a really inefficient way, but it does the trick.
This is a sample query for Oracle; PostgreSQL should support this feature as well, with small differences in syntax. I don't know about other RDBMSs.
with EmployeeTable AS (
SELECT 'ADA' AS employeename
FROM DUAL
UNION ALL
SELECT 'IDA' AS employeename
FROM DUAL
UNION ALL
SELECT 'JACK' AS employeename
FROM DUAL
), letters as (
select chr(ascii('A') + rownum - 1) as letter
from dual
connect by ascii('A') + rownum - 1 <= ascii('Z')
), palindromes(word, len ) as (
SELECT WORD, LEN
FROM (
select CAST(NULL AS VARCHAR2(100)) as word, 0 as len
from DUAL
union all
select letter as word, 1 as len
from letters
)
union all
select l.letter||p.word||l.letter AS WORD, len + 2 AS LEN
from palindromes p
cross join letters l
where len <= 4
)
SEARCH BREADTH FIRST BY word SET order1
CYCLE word SET is_cycle TO 'Y' DEFAULT 'N'
select *
from EmployeeTable
WHERE employeename IN (
SELECT WORD
FROM palindromes
)
DECLARE @cPalindrome VARCHAR(100) = 'SUBI NO ONIBUS'
SET @cPalindrome = REPLACE(@cPalindrome, ' ', '')
;WITH tPalindromo (iNo) AS (
SELECT 1
WHERE SUBSTRING(@cPalindrome, 1, 1) = SUBSTRING(@cPalindrome, LEN(@cPalindrome), 1)
UNION ALL
SELECT iNo + 1
FROM tPalindromo
WHERE SUBSTRING(@cPalindrome, iNo + 1, 1) = SUBSTRING(@cPalindrome, LEN(@cPalindrome) - iNo, 1)
AND LEN(@cPalindrome) > iNo
)
SELECT IIF(MAX(iNo) = LEN(@cPalindrome), 'PALINDROME', 'NOT PALINDROME')
FROM tPalindromo

SQL 2005 Merge / concatenate multiple rows to one column

We have a bit of a SQL quandary. Say I have results that look like this...
61E77D90-D53D-4E2E-A09E-9D6F012EB59C | A
61E77D90-D53D-4E2E-A09E-9D6F012EB59C | B
61E77D90-D53D-4E2E-A09E-9D6F012EB59C | C
61E77D90-D53D-4E2E-A09E-9D6F012EB59C | D
7ce953ca-a55b-4c55-a52c-9d6f012ea903 | E
7ce953ca-a55b-4c55-a52c-9d6f012ea903 | F
Is there a way I can group these results within SQL to return them as
61E77D90-D53D-4E2E-A09E-9D6F012EB59C | A B C D
7ce953ca-a55b-4c55-a52c-9d6f012ea903 | E F
Any ideas people?
Many thanks
Dave
try this:
set nocount on;
declare @t table (id char(36), x char(1))
insert into @t (id, x)
select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' , 'A' union
select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' , 'B' union
select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' , 'C' union
select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' , 'D' union
select '7ce953ca-a55b-4c55-a52c-9d6f012ea903' , 'E' union
select '7ce953ca-a55b-4c55-a52c-9d6f012ea903' , 'F'
set nocount off
SELECT p1.id,
stuff(
(SELECT
' ' + x
FROM @t p2
WHERE p2.id=p1.id
ORDER BY id, x
FOR XML PATH('')
)
,1,1, ''
) AS YourValues
FROM @t p1
GROUP BY id
OUTPUT:
id YourValues
------------------------------------ --------------
61E77D90-D53D-4E2E-A09E-9D6F012EB59C A B C D
7ce953ca-a55b-4c55-a52c-9d6f012ea903 E F
(2 row(s) affected)
EDIT
Based on the OP's comment about this needing to run against an existing query, try this:
;WITH YourBugQuery AS
(
--replace this with your own query
select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' AS ColID , 'A' AS ColX
union select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' , 'B'
union select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' , 'C'
union select '61E77D90-D53D-4E2E-A09E-9D6F012EB59C' , 'D'
union select '7ce953ca-a55b-4c55-a52c-9d6f012ea903' , 'E'
union select '7ce953ca-a55b-4c55-a52c-9d6f012ea903' , 'F'
)
SELECT p1.ColID,
stuff(
(SELECT
' ' + ColX
FROM YourBugQuery p2
WHERE p2.ColID=p1.ColID
ORDER BY ColID, ColX
FOR XML PATH('')
)
,1,1, ''
) AS YourValues
FROM YourBugQuery p1
GROUP BY ColID
This has the same result set as displayed above.
I prefer to define a custom user-defined aggregate. Here's an example of a UDA which will accomplish something very close to what you're asking.
Why use a user-defined aggregate instead of a nested SELECT? It's all about performance, and what you are willing to put up with. For a small amount of elements, you can most certainly get away with a nested SELECT, but for large "n", you'll notice that the query plan essentially runs the nested SELECT once for every row in the output list. This can be the kiss of death if you're talking about a large number of rows. With a UDA, it's possible to aggregate these values in a single pass.
The tradeoff, of course, is that the UDA requires you to use the CLR to deploy it, and that's something not a lot of people do often. In Oracle, this particular situation is a bit nicer as you can use PL/SQL directly to create your user-defined aggregate, but I digress...
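For comparison, once such an aggregate is deployed, the grouping query collapses to a single pass. The aggregate name dbo.Concatenate below is hypothetical (nothing ships with SQL Server under that name); the @t sample table variable is the one from the first answer:
-- dbo.Concatenate is a hypothetical CLR user-defined aggregate
SELECT id, dbo.Concatenate(x) AS YourValues
FROM @t
GROUP BY id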
Another way of doing it is to use the FOR XML PATH option
SELECT
[ID],
(
SELECT
[Value] + ' '
FROM
[YourTable] [YourTable2]
WHERE
[YourTable2].[ID] = [YourTable].[ID]
ORDER BY
[Value]
FOR XML PATH('')
) [Values]
FROM
[YourTable]
GROUP BY
[YourTable].[ID]