Alternate approach to WITH CTE and large UNION query - sql

I'd like to rework a script I've been given.
The way it currently works is via a WITH CTE using a large number of UNIONs.
Current setup
We're taking one record from a source table, inserting it into a destination table once with [Name] A, then inserting it again with [Name] B. Essentially we're creating multiple rows in the destination, albeit with different [Name] values.
An example of one transaction would be to take this row from [Source]:
ID [123] Name [Red and Green]
The results of my current set up in the [Destination] is:
ID [123] Name [Red]
ID [123] Name [Green]
Current logic
Here's a simplified version of the current logic:
WITH CTE AS
(
    SELECT ID,
           'Red' AS [Name]
    FROM [Source_Table]
    WHERE [Name] = 'Red and Green'
    UNION ALL
    SELECT ID,
           'Green' AS [Name]
    FROM [Source_Table]
    WHERE [Name] = 'Red and Green'
)
INSERT INTO [Destination_Table] (ID, [Name])
SELECT ID,
       [Name]
FROM CTE;
The reason I'd like to rework this is that when we get a new [Name], we have to manually add another portion of code to our (ever-increasing) UNION to make sure it gets picked up.
What I've considered
What I was considering was setting up a WHILE loop (or cursor) running off a control table where we could store all of the [Name]s. However, I'm not sure this would be the best approach, and I'm not too familiar with loops/cursors yet. I also wouldn't be sure how to stop the loop once all the [Name]s had been processed.
Any help much appreciated.

You can use cross apply to duplicate the rows:
insert into [destination_table] (id, name)
select x.id, x.name
from source_table s
cross apply (values (s.id, 'Red'), (s.id, 'Green')) x(id, name)
where s.name = 'Red and Green';

Introduce a new table called Color_List which just contains one row for each possible color. Then do this:
with cte as
(
    select s.ID,
           c.colorname
    from Source_Table s
    inner join Color_List c
        on CHARINDEX(c.colorname, s.[Name]) > 0
)
insert into Destination_Table (ID, [Name])
select ID,
       colorname
from cte;
The benefit of this method is that you aren't hard-coding any color names in the query. All the color names (and presumably there can be many more than two) get maintained in the Color_List table.
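For concreteness, here is a minimal sketch of what Color_List could look like (the table name and colorname column match the query above; the seed values and column width are assumptions):

CREATE TABLE Color_List
(
    colorname varchar(50) PRIMARY KEY
);

INSERT INTO Color_List (colorname)
VALUES ('Red'), ('Green');
-- When a new [Name] appears, add one row here; the INSERT query itself never changes.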

You could use string_split to split the values apart: first replace the ' and ' with a pipe ('|'), then do a string split on the pipe.
drop table if exists #tTEST;
go
select * INTO #tTEST from (values
(1, '[123]', 'Name', '[Red and Green]')) V(ID, testCol, nameCol, stringCol);
select ID, testCol, nameCol,
       case when left([value], 1) != '[' then concat('[', [value])
            else case when right([value], 1) != ']' then concat([value], ']')
                      else [value]
                 end
       end as valCol
from #tTEST t
cross apply string_split(replace(t.stringCol, ' and ', '|'), '|');
Results:

ID  testCol  nameCol  valCol
1   [123]    Name     [Red]
1   [123]    Name     [Green]

Related

Join using a LIKE clause is taking too long

Please see the T-SQL below:
create table #IDs (id varchar(100))
insert into #IDs values ('123')
insert into #IDs values ('456')
insert into #IDs values ('789')
insert into #IDs values ('1010')
create table #Notes (Note varchar(500))
insert into #Notes values ('Here is a note for 123')
insert into #Notes values ('A note for 789 here')
insert into #Notes values ('456 has a note here')
I want to find all the IDs that are referenced in the #Notes table. This works:
select #IDs.id from #IDs inner join #Notes on #Notes.note like '%' + #IDs.id + '%'
However, there are hundreds of thousands of records in both tables and the query does not complete. I was thinking about FreeText searching, but I don't think it can be applied here. A cursor takes too long to run as well (I think it will take over one month). Is there anything else I can try? I am using SQL Server 2019.
The size of the input is only one aspect of the solution.
By splitting the text into tokens you indeed increase the number of records, but at the same time you enable an equality join, which can be implemented as a hash join.
You should get the query results in a few minutes tops, basically the time it takes your system to do a full scan of both tables, plus some processing time.
No need for temp tables.
No need for indexes.
Select id
from #IDS
where id in (select w.value
from #Notes as n
cross apply string_split(n.Note, ' ') as w
)
Fiddle
Per the OP request -
Here is code that handles a more complicated scenario, where an ID can contain various characters (as defined by @token_char) and the separators are potentially all other characters:
declare @token_char varchar(100) = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
;
with cte_notes as
(
    select Note,
           replace(translate(Note, @token_char, space(len(@token_char))), ' ', '') as non_token_char
    from #Notes
)
select id
from #IDs
where id in
(
    select w.value
    from cte_notes as n
    cross apply string_split(translate(n.Note, n.non_token_char, space(len(n.non_token_char))), ' ') as w
    where w.value != ''
)
The Fiddle data sample was altered accordingly, to reflect the change
If you are going to do this search often you may want to explore using a wonderful (if underused) feature of SQL Server called 'Full Text Search.' To quote Microsoft:
"A LIKE query against millions of rows of text data can take minutes to return; whereas a full-text query can take only seconds or less against the same data, depending on the number of rows that are returned."
I have seen searches go from minutes to seconds using this feature.
You would need to create a Full Text Search catalog and then create indexes on the tables you want to search. It's not hard, and it will only take you a few minutes to learn how to do this.
This is a good starting point:
https://learn.microsoft.com/en-us/sql/relational-databases/search/get-started-with-full-text-search?view=sql-server-ver15
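As a rough sketch of the setup (assuming a permanent table dbo.Notes with a unique key index PK_Notes; full-text indexes can't be created on temp tables, and all names here are placeholders):

CREATE FULLTEXT CATALOG NotesCatalog AS DEFAULT;
GO
CREATE FULLTEXT INDEX ON dbo.Notes (Note) KEY INDEX PK_Notes;
GO
-- CONTAINS uses the full-text index instead of scanning with LIKE:
SELECT Note
FROM dbo.Notes
WHERE CONTAINS(Note, '"123"');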
I would apply a CTE with string_split to filter out all non-numeric tokens, and then join the #IDs table with the result of the CTE on the id column. The query was tested on a sample of 1 million rows.
With CTE As (
Select T.value As id
From #Notes Cross Apply String_Split(Note,' ') As T
Where Try_Convert(Int, T.value) Is Not Null
)
Select I.id
From #IDs As I Inner Join CTE On (I.id=CTE.id)
If you just want to extract the numeric values from the strings, a join is excessive in this case:
Select T.value As id, #Notes.Note
From #Notes Cross Apply String_Split(Note,' ') As T
Where Try_Convert(Int, T.value) Is Not Null And T.value Like '%[0-9]%'
id   Note
123  Here is a note for 123
789  A note for 789 here
456  456 has a note here
No matter what, under the given circumstances I would use a join to filter out those numbers that are not present in the #IDs table.
With CTE As (
Select distinct(id) As id
From #IDs
)
Select T.value As id, #Notes.Note
From #Notes Cross Apply String_Split(Note,' ') As T
Inner Join CTE On (T.value=CTE.id)
Where Try_Convert(Int, T.value) Is Not Null
And T.value Like '%[0-9]%'
If the string contains brackets or parentheses instead of spaces, like this:
"456(this is an id number) has a note here" or "456[01/01/2022]"
then as a last resort (since it degrades performance) you can use TRANSLATE to replace those brackets with spaces, as follows:
With CTE As (
Select distinct(id) As id
From #IDs
)
Select T.value As id, #Notes.Note
From #Notes Cross Apply String_Split(TRANSLATE(Note, '[]()', space(4)), ' ') As T -- TRANSLATE needs equal-length argument strings
Inner Join CTE On (T.value=CTE.id)
Where Try_Convert(Int, T.value) Is Not Null
And T.value Like '%[0-9]%'
db<>fiddle

How to select 2 cross split string column in single query

CREATE TABLE #StudentClasses
(
ID INT,
Student VARCHAR(100),
Classes VARCHAR(100),
CCode VARCHAR(30)
)
GO
INSERT INTO #StudentClasses
SELECT 1, 'Mark', 'Maths,Science,English', 'US,UK,AUS'
UNION ALL
SELECT 2, 'John', 'Science,English', 'BE,DE'
UNION ALL
SELECT 3, 'Robert', 'Maths,English', 'CA,IN'
GO
SELECT *
FROM #StudentClasses
GO
SELECT ID, Student, value ,value
FROM #StudentClasses
CROSS APPLY STRING_SPLIT(Classes, ',')
CROSS APPLY STRING_SPLIT(CCode, ',')
This must be said first: do not store delimited data! If there is any chance to change your table's design, you should use related side tables to store data of this kind...
Your question is not much better than the one before. Without your expected result, any suggestion must be guesswork.
What I guess: you want to transform 'Maths,Science,English' and 'US,UK,AUS' in such a way that Maths goes along with US, Science with UK, and English with AUS. Try this:
SELECT sc.ID
,sc.Student
,A.[key] AS Position
,A.[value] AS Class
,B.[value] AS CCode
FROM #StudentClasses sc
CROSS APPLY OPENJSON('["' + REPLACE(Classes,',','","') + '"]') A
CROSS APPLY OPENJSON('["' + REPLACE(CCode,',','","') + '"]') B
WHERE A.[key]=B.[key];
You did not tell us your SQL Server version, but you tagged Azure, so I assume that v2016 is okay for you. With a lower version (or a lower compatibility level of the given database) there is no JSON support.
Why JSON at all? It is currently the best way to split CSV data and get the fragments together with their position within the array. Regrettably, STRING_SPLIT() does not guarantee to return the expected order. With versions lower than v2016 there are several more or less ugly tricks...
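Worth noting: on SQL Server 2022 and Azure SQL Database, STRING_SPLIT accepts a third enable_ordinal argument that returns each fragment's position, so on those versions you can skip the JSON trick. A sketch:

SELECT sc.ID,
       sc.Student,
       c.value  AS Class,
       cc.value AS CCode
FROM #StudentClasses sc
CROSS APPLY STRING_SPLIT(sc.Classes, ',', 1) c   -- 1 = enable_ordinal
CROSS APPLY STRING_SPLIT(sc.CCode, ',', 1) cc
WHERE c.ordinal = cc.ordinal;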
If you need your result side-by-side you should read about conditional aggregation.
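For illustration, a minimal conditional-aggregation sketch built on the OPENJSON split above (the Class1..Class3 aliases are my own assumption about the desired layout):

SELECT sc.ID,
       sc.Student,
       MAX(CASE WHEN A.[key] = '0' THEN A.[value] END) AS Class1,
       MAX(CASE WHEN A.[key] = '1' THEN A.[value] END) AS Class2,
       MAX(CASE WHEN A.[key] = '2' THEN A.[value] END) AS Class3
FROM #StudentClasses sc
CROSS APPLY OPENJSON('["' + REPLACE(sc.Classes, ',', '","') + '"]') A
GROUP BY sc.ID, sc.Student;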
Use select all (*) together with aliases:
CREATE TABLE #StudentClasses
(ID INT, Student VARCHAR(100), Classes VARCHAR(100),CCode varchar(30))
INSERT INTO #StudentClasses
SELECT 1, 'Mark', 'Maths,Science,English', 'US,UK,AUS'
UNION ALL
SELECT 2, 'John', 'Science,English', 'BE,DE'
UNION ALL
SELECT 3, 'Robert', 'Maths,English', 'CA,IN'
SELECT *, v1.value as classes, v2.value as codes
FROM #StudentClasses
CROSS APPLY STRING_SPLIT(Classes, ',') v1
CROSS APPLY STRING_SPLIT(CCode, ',') v2

Get rows in which any column contains sequence using exists select 1

I have a table with three columns with the following values (dbFiddle)
C1      C2      C3
------  ------  ------
Red     Yellow  Blue
null    Red     Green
Yellow  null    Violet
I'm trying to create a query that returns all the rows that contain the value "Yellow" without using IN or OR. If I execute the following query:
SELECT 1
FROM test
WHERE CONCAT(C1, C2, C3) LIKE '%Yellow%'
It correctly returns the rows specified. However, if I try to use this query inside an exists:
SELECT *
FROM test
WHERE EXISTS (SELECT 1 FROM test WHERE CONCAT(C1, C2, C3) LIKE '%Yellow%')
it returns all the rows, not just the two with the "Yellow" word. What am I doing wrong here?
Any help would be greatly appreciated.
Regarding:
SELECT 1 FROM test WHERE CONCAT(C1, C2, C3) LIKE '%Yellow%'
"correctly returns the rows specified"
That SELECT returns a single constant column, 1, once for each row that has some column containing Yellow somewhere in its text; it does not return the rows themselves. Your outer query then returns all rows because EXISTS:
"Returns TRUE if a subquery contains any rows."
i.e. all of the following queries also return all rows in your test table:
SELECT * FROM test WHERE EXISTS (SELECT 1);
SELECT * FROM test WHERE EXISTS (SELECT 0);
SELECT * FROM test WHERE EXISTS (SELECT NULL);
... simply because the SELECT returns at least one row!
The usual usage of EXISTS also includes correlation of the subquery in the EXISTS back to the outer select.
Example of Correlation
In the below contrived example, we've got 4 people living in two houses. Here we're using EXISTS to figure out the names of the persons who are happy, and also have someone else who is also happy living in the same (correlated) House.
CREATE TABLE House
(
HouseId INT PRIMARY KEY,
Name VARCHAR(MAX)
);
CREATE TABLE Person
(
PersonId INT PRIMARY KEY,
HouseId INT FOREIGN KEY REFERENCES HOUSE(HouseId),
Name VARCHAR(MAX),
IsHappy BIT
);
INSERT INTO House(HouseId, Name) VALUES (1, 'House1'), (2, 'House2');
INSERT INTO Person(PersonId, HouseId, Name, IsHappy) VALUES
(1, 1, 'Joe', 0),
(2, 1, 'Jim', 1),
(3, 2, 'Fred', 1),
(4, 2, 'Mary', 1);
SELECT pOuter.Name
FROM Person pOuter
WHERE pOuter.IsHappy = 1
AND EXISTS
(SELECT 1
FROM Person pInner
WHERE pInner.HouseId = pOuter.HouseId
AND pInner.PersonId != pOuter.PersonId
AND pInner.IsHappy = 1);
Returns
Mary
Fred
(There are obviously other ways to find the same result, e.g. finding groupings of House Id where there exists 2 or more happy people, etc)
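For instance, a sketch of that grouping variant (same result, no EXISTS):

SELECT p.Name
FROM Person p
WHERE p.IsHappy = 1
  AND p.HouseId IN (SELECT HouseId
                    FROM Person
                    WHERE IsHappy = 1
                    GROUP BY HouseId
                    HAVING COUNT(*) >= 2);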
The EXISTS clause specifies a subquery to test for the existence of rows.
So exists (SELECT 1 FROM test WHERE CONCAT(C1, C2, C3) LIKE '%Yellow%') will always return rows as long as any row in the table contains yellow data.
If you want to use exists, you need to correlate the inner query with the outer table, i.e. CONCAT(t.C1, t.C2, t.C3):
SELECT *
FROM test t
where exists (SELECT 1 FROM test WHERE CONCAT(t.C1, t.C2, t.C3) LIKE '%Yellow%')
You don't need to use exists; just set the condition in the where clause:
SELECT *
FROM test
where CONCAT(C1, C2, C3) LIKE '%Yellow%'
sqlfiddle
I would use cross apply:
SELECT 1
FROM test t CROSS APPLY
(SELECT COUNT(*) as cnt
FROM (VALUES (C1), (C2), (C3)) V(C)
WHERE c = 'Yellow'
) v
WHERE cnt > 0;
You can readily adapt this to a subquery:
SELECT . . .
FROM test t
WHERE EXISTS (SELECT 1
FROM (VALUES (C1), (C2), (C3)) V(C)
WHERE c = 'Yellow'
) ;
Personally, I much prefer the direct comparison of each value to 'Yellow' rather than using LIKE. For instance, this will not match "yellow-green" or any other value where "yellow" is part of the name.
And, just for the record, you can still use boolean logic, even if you don't use OR and IN:
where not (coalesce(c1, '') <> 'Yellow' and
coalesce(c2, '') <> 'Yellow' and
coalesce(c3, '') <> 'Yellow'
)
Technically, this is probably the "simplest" solution to your problem. However, I still prefer the apply method, because the intent is clearer.

SQL Server : Columns to Rows

Looking for an elegant (or any) solution to convert columns to rows.
Here is an example: I have a table with the following schema:
[ID] [EntityID] [Indicator1] [Indicator2] [Indicator3] ... [Indicator150]
Here is what I want to get as the result:
[ID] [EntityId] [IndicatorName] [IndicatorValue]
And the result values will be:
1 1 'Indicator1' 'Value of Indicator 1 for entity 1'
2 1 'Indicator2' 'Value of Indicator 2 for entity 1'
3 1 'Indicator3' 'Value of Indicator 3 for entity 1'
4 2 'Indicator1' 'Value of Indicator 1 for entity 2'
And so on..
Does this make sense? Do you have any suggestions on where to look and how to get it done in T-SQL?
You can use the UNPIVOT function to convert the columns into rows:
select id, entityId,
indicatorname,
indicatorvalue
from yourtable
unpivot
(
indicatorvalue
for indicatorname in (Indicator1, Indicator2, Indicator3)
) unpiv;
Note: the datatypes of the columns you are unpivoting must be the same, so you might have to convert them prior to applying the unpivot.
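For instance, a minimal sketch of aligning the types first (varchar(100) is an assumption; pick a type wide enough for every indicator):

select id, entityId,
       indicatorname,
       indicatorvalue
from
(
    select id, entityId,
           cast(Indicator1 as varchar(100)) as Indicator1,
           cast(Indicator2 as varchar(100)) as Indicator2,
           cast(Indicator3 as varchar(100)) as Indicator3
    from yourtable
) src
unpivot
(
    indicatorvalue
    for indicatorname in (Indicator1, Indicator2, Indicator3)
) unpiv;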
You could also use CROSS APPLY with UNION ALL to convert the columns:
select id, entityid,
indicatorname,
indicatorvalue
from yourtable
cross apply
(
select 'Indicator1', Indicator1 union all
select 'Indicator2', Indicator2 union all
select 'Indicator3', Indicator3 union all
select 'Indicator4', Indicator4
) c (indicatorname, indicatorvalue);
Depending on your version of SQL Server you could even use CROSS APPLY with the VALUES clause:
select id, entityid,
indicatorname,
indicatorvalue
from yourtable
cross apply
(
values
('Indicator1', Indicator1),
('Indicator2', Indicator2),
('Indicator3', Indicator3),
('Indicator4', Indicator4)
) c (indicatorname, indicatorvalue);
Finally, if you have 150 columns to unpivot and you don't want to hard-code the entire query, then you could generate the sql statement using dynamic SQL:
DECLARE @colsUnpivot AS NVARCHAR(MAX),
        @query AS NVARCHAR(MAX)

select @colsUnpivot
    = stuff((select ','+quotename(C.column_name)
             from information_schema.columns as C
             where C.table_name = 'yourtable' and
                   C.column_name like 'Indicator%'
             for xml path('')), 1, 1, '')

set @query
    = 'select id, entityId,
            indicatorname,
            indicatorvalue
       from yourtable
       unpivot
       (
           indicatorvalue
           for indicatorname in ('+ @colsUnpivot +')
       ) u'

exec sp_executesql @query;
Well, if you have 150 columns, then I think UNPIVOT is not an option. So you could use an XML trick:
;with CTE1 as (
select ID, EntityID, (select t.* for xml raw('row'), type) as Data
from temp1 as t
), CTE2 as (
select
C.id, C.EntityID,
F.C.value('local-name(.)', 'nvarchar(128)') as IndicatorName,
F.C.value('.', 'nvarchar(max)') as IndicatorValue
from CTE1 as c
outer apply c.Data.nodes('row/@*') as F(C)
)
select * from CTE2 where IndicatorName like 'Indicator%'
sql fiddle demo
You could also write dynamic SQL, but I like XML more: for dynamic SQL you have to have permission to select data directly from the table, and that's not always an option.
UPDATE: As there is a big flame in the comments, I think I'll add some pros and cons of XML vs dynamic SQL. I'll try to be as objective as I can and not mention elegance or ugliness. If you have any other pros and cons, edit the answer or write them in the comments.
Cons:
- it's not as fast as dynamic SQL; rough tests gave me that XML is about 2.5 times slower than dynamic SQL (one query on a ~250000 row table, so this estimate is in no way exact). You can compare it yourself if you want; here's a sqlfiddle example: on 100000 rows it was 29s (XML) vs 14s (dynamic);
- it may be harder to understand for people not familiar with XPath.
Pros:
- it runs in the same scope as your other queries, and that can be very handy. A few examples come to mind:
- you can query the inserted and deleted tables inside your trigger (not possible with dynamic SQL at all);
- the user doesn't need permission to select directly from the table: if you have a stored-procedure layer and users have permission to run the procedures but not to query the tables directly, you can still use this query inside a stored procedure;
- you can query a table variable you have populated in the current scope (to pass it into dynamic SQL you have to either make it a temporary table instead, or create a table type and pass it as a parameter into the dynamic SQL);
- you can use this query inside a function (scalar or table-valued); it's not possible to use dynamic SQL inside functions.
Just to help new readers, I've created an example to better understand @bluefeet's answer about UNPIVOT.
SELECT id
,entityId
,indicatorname
,indicatorvalue
FROM (VALUES
(1, 1, 'Value of Indicator 1 for entity 1', 'Value of Indicator 2 for entity 1', 'Value of Indicator 3 for entity 1'),
(2, 1, 'Value of Indicator 1 for entity 2', 'Value of Indicator 2 for entity 2', 'Value of Indicator 3 for entity 2'),
(3, 1, 'Value of Indicator 1 for entity 3', 'Value of Indicator 2 for entity 3', 'Value of Indicator 3 for entity 3'),
(4, 2, 'Value of Indicator 1 for entity 4', 'Value of Indicator 2 for entity 4', 'Value of Indicator 3 for entity 4')
) AS Category(ID, EntityId, Indicator1, Indicator2, Indicator3)
UNPIVOT
(
indicatorvalue
FOR indicatorname IN (Indicator1, Indicator2, Indicator3)
) UNPIV;
Just because I did not see it mentioned:
If you are on SQL Server 2016+, here is yet another option to dynamically unpivot data without actually using dynamic SQL.
Example
Declare @YourTable Table ([ID] varchar(50), [Col1] varchar(50), [Col2] varchar(50))
Insert Into @YourTable Values
(1,'A','B')
,(2,'R','C')
,(3,'X','D')
Select A.[ID]
,Item = B.[Key]
,Value = B.[Value]
From @YourTable A
Cross Apply ( Select *
From OpenJson((Select A.* For JSON Path,Without_Array_Wrapper ))
Where [Key] not in ('ID','Other','Columns','ToExclude')
) B
Returns
ID Item Value
1 Col1 A
1 Col2 B
2 Col1 R
2 Col2 C
3 Col1 X
3 Col2 D
I needed a solution to convert columns to rows in Microsoft SQL Server without knowing the column names (for use in a trigger) and without dynamic SQL (dynamic SQL is too slow for use in a trigger).
I finally found this solution, which works fine:
SELECT
insRowTbl.PK,
insRowTbl.Username,
attr.insRow.value('local-name(.)', 'nvarchar(128)') as FieldName,
attr.insRow.value('.', 'nvarchar(max)') as FieldValue
FROM ( Select
i.ID as PK,
i.LastModifiedBy as Username,
convert(xml, (select i.* for xml raw)) as insRowCol
FROM inserted as i
) as insRowTbl
CROSS APPLY insRowTbl.insRowCol.nodes('/row/@*') as attr(insRow)
As you can see, I convert the row into XML (the subquery select i.* for xml raw converts all columns into one XML column).
Then I CROSS APPLY a function to each XML attribute of this column, so that I get one row per attribute.
Overall, this converts columns into rows, without knowing the column names and without using dynamic sql. It is fast enough for my purpose.
(Edit: I just saw Roman Pekar's answer above, who is doing the same.
I used the dynamic SQL trigger with cursors first, which was 10 to 100 times slower than this solution, but maybe that was caused by the cursor, not by the dynamic SQL. Anyway, this solution is very simple and universal, so it's definitely an option.)
I am leaving this comment at this place, because I want to reference this explanation in my post about the full audit trigger, that you can find here: https://stackoverflow.com/a/43800286/4160788
DECLARE @TableName varchar(max) = NULL
SELECT @TableName = COALESCE(@TableName + ',', '') + t.TABLE_CATALOG + '.' + t.TABLE_SCHEMA + '.' + o.Name
FROM sysindexes AS i
INNER JOIN sysobjects AS o ON i.id = o.id
INNER JOIN INFORMATION_SCHEMA.TABLES t ON t.TABLE_NAME = o.name
WHERE i.indid < 2
AND OBJECTPROPERTY(o.id, 'IsMSShipped') = 0
AND i.rowcnt > 350
AND o.xtype != 'TF'
ORDER BY o.name ASC
print @TableName
This gets you the list of tables that have row counts > 350, concatenated into a single row (one comma-separated value).
The opposite of this is to flatten a column into a CSV, e.g.:
SELECT STRING_AGG ([value],',') FROM STRING_SPLIT('Akio,Hiraku,Kazuo', ',')

Select values that don't occur in a table

I'm sure this has been asked somewhere, but I found it difficult to search for.
If I want to get all records where a column value equals one in a list, I'd use the IN operator.
SELECT idSparePart, SparePartName
FROM tabSparePart
WHERE SparePartName IN (
'1234-2043','1237-8026','1238-1036','1238-1039','1223-5172'
)
Suppose this SELECT returns 4 rows although the list has 5 items. How can I select the value that does not occur in the table?
Thanks in advance.
select t.* from (
select '1234-2043' as sparePartName
union select '1237-8026'
union select '1238-1036'
union select '1238-1039'
union select '1223-5172'
) t
where not exists (
select 1 from tabSparePart p WHERE p.SparePartName = t.sparePartName
)
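On SQL Server 2008+ the same idea reads a bit shorter with a VALUES table constructor (a sketch, equivalent to the UNION above):

select t.sparePartName
from (values ('1234-2043'), ('1237-8026'), ('1238-1036'),
             ('1238-1039'), ('1223-5172')) t(sparePartName)
where not exists (
    select 1 from tabSparePart p where p.SparePartName = t.sparePartName
)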
As soon as you mentioned that I have to create a temp table, I remembered my Split function.
Sorry for answering my own question, but this might be the best/simplest way for me:
SELECT PartNames.Item
FROM dbo.Split('1234-2043,1237-8026,1238-1036,1238-1039,1223-5172', ',') AS PartNames
LEFT JOIN tabSparePart ON tabSparePart.SparePartName = PartNames.Item
WHERE idSparePart IS NULL
My Split-function:
Help with a sql search query using a comma delimitted parameter
Thank you all anyway.
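For readers on SQL Server 2016+, the built-in STRING_SPLIT can stand in for a custom Split function; a sketch of the same left-join approach (STRING_SPLIT names its output column value rather than Item):

SELECT PartNames.value
FROM STRING_SPLIT('1234-2043,1237-8026,1238-1036,1238-1039,1223-5172', ',') AS PartNames
LEFT JOIN tabSparePart ON tabSparePart.SparePartName = PartNames.value
WHERE tabSparePart.idSparePart IS NULL;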
Update: I misunderstood the question. I guess in that case I would select the values into a temp table, then select the values which are not in that table. Not ideal, I know -- the problem is that you need to get your list of part names to SQL Server somehow (either via IN or putting them in a temp table) but the semantics of IN don't do what you want.
Something like this:
CREATE TABLE tabSparePart
(
SparePartName nvarchar(50)
)
insert into tabSparePart values('1234-2043')
CREATE TABLE #tempSparePartName
(
SparePartName nvarchar(50)
)
insert into #tempSparePartName values('1234-2043')
insert into #tempSparePartName values('1238-1036')
insert into #tempSparePartName values('1237-8026')
select * from #tempSparePartName
where SparePartName not in (select SparePartName from tabSparePart)
With output:
SparePartName
1238-1036
1237-8026
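One caveat worth adding: if tabSparePart.SparePartName can ever be NULL, the NOT IN above returns no rows at all, because every comparison with NULL is unknown. A NOT EXISTS version avoids that trap:

select t.SparePartName
from #tempSparePartName t
where not exists (
    select 1 from tabSparePart p where p.SparePartName = t.SparePartName
)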
Original (wrong) answer:
You can just use "not in":
SELECT * from tabSparePart WHERE SparePartName NOT in(
'1234-2043','1237-8026','1238-1036','1238-1039','1223-5172'
)
You could try something like this....
declare @test table
(
    items varchar(50)
)

insert into @test values ('1234-2043')
insert into @test values ('1237-8026')
insert into @test values ('1238-1036')
-- the rest of the values --

select * from @test
where items not in (
    select SparePartName from tabSparePart
)
For fun, check this out:
http://blogs.microsoft.co.il/blogs/itai/archive/2009/02/01/t-sql-split-function.aspx
It shows you how to take delimited data and return it from a table-valued function as separate "rows", which may make the process of creating the table to select from easier than inserting into a #table or doing a giant select-union subquery.