I have the following problem that I would like to solve with Transact-SQL.
I have something like this:
Start | End | Item
1 | 5 | A
3 | 8 | B
and I want to create something like
Start | End | Item-Combination
1 | 2 | A
3 | 5 | A-B
6 | 8 | B
For the Item-Combination concatenation I already thought of using the FOR XML statement. But in order to create the different new intervals... I really don't know how to approach it. Any idea?
Thanks.
I had a very similar problem with some computer usage data. I had session data indicating login/logout times. I wanted to find the times (hour of day per day of week) that were the most in demand, that is, the hours where the most users were logged in. I ended up solving the problem client-side using hash tables. For each session, I would increment the bucket for a particular location corresponding to the day of week and hour of day for each day/hour for which the session was active. After examining all sessions the hash table values show the number of logins during each hour for each day of the week.
I think you could do something similar, keeping track of each item seen for each start/end value. You could then reconstruct the table by collapsing adjacent entries that have the same item combination.
And, no, I could not think of a way to solve my problem with SQL either.
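For what it's worth, the bucket-counting idea can be sketched set-based after all. The following is only a sketch under assumptions: a hypothetical sessions table with login/logout datetimes and a small numbers table standing in for the hash-table buckets (all names are made up):

```sql
-- Hypothetical example: count how many sessions were active during each
-- (weekday, hour) bucket, using a numbers table instead of a client-side
-- hash table.
declare @sessions table (login_time datetime, logout_time datetime);
insert @sessions values ('2009-01-05 08:30', '2009-01-05 11:10');
insert @sessions values ('2009-01-05 09:15', '2009-01-05 10:05');

declare @n table (n int);
insert @n select 0 union all select 1 union all select 2 union all select 3;
-- (a real numbers table would need enough rows to cover the longest session)

select
    datepart(weekday, dateadd(hour, n.n, s.hour_start)) as day_of_week,
    datepart(hour,    dateadd(hour, n.n, s.hour_start)) as hour_of_day,
    count(*) as sessions_active
from (select login_time, logout_time,
             dateadd(hour, datediff(hour, 0, login_time), 0) as hour_start
      from @sessions) as s
join @n as n
  on dateadd(hour, n.n, s.hour_start) < s.logout_time
group by
    datepart(weekday, dateadd(hour, n.n, s.hour_start)),
    datepart(hour,    dateadd(hour, n.n, s.hour_start));
```

Each session is joined to every hour it overlaps, and the GROUP BY does the bucket increments in one pass.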
This is a fairly typical range-finding problem, with the concatenation thrown in. Not sure if the following fits exactly, but it's a starting point. (Cursors are usually best avoided except in the small set of cases where they are faster than set-based solutions, so before the cursor haters get on me please note I use a cursor here on purpose because this smells to me like a cursor-friendly problem -- I typically avoid them.)
So if I create data like this:
CREATE TABLE [dbo].[sourceValues](
[Start] [int] NOT NULL,
[End] [int] NOT NULL,
[Item] [varchar](100) NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[sourceValues] WITH CHECK ADD CONSTRAINT [End_after_Start] CHECK (([End]>[Start]))
GO
ALTER TABLE [dbo].[sourceValues] CHECK CONSTRAINT [End_after_Start]
GO
declare @i int; set @i = 0;
declare @start int;
declare @end int;
declare @item varchar(100);
while @i < 1000
begin
set @start = ABS( CHECKSUM( newid() ) % 100 ) + 1; -- "random" int
set @end = @start + ( ABS( CHECKSUM( newid() ) % 10 ) ) + 2; -- bigger random int
set @item = char( ( ABS( CHECKSUM( newid() ) ) % 5 ) + 65 ); -- random letter A-E
print @start; print @end; print @item;
insert into sourceValues( Start, [End], Item ) values ( @start, @end, @item );
set @i += 1;
end
Then I can treat the problem like this: each "Start" AND each "End" value represents a change in the collection of current Items, either adding one or removing one, at a certain time. In the code below I alias that notion as "event," meaning an Add or Remove. Each start or end is like a time, so I use the term "tick." If I make a collection of all the events, ordered by event time (Start AND End), I can iterate through it while keeping a running tally in an in-memory table of all the Items that are in play. Each time the tick value changes, I take a snapshot of that tally:
declare @tick int;
declare @lastTick int;
declare @event varchar(100);
declare @item varchar(100);
declare @concatList varchar(max);
declare @currentItemsList table ( Item varchar(100) );
create table #result ( Start int, [End] int, Items varchar(max) );
declare eventsCursor CURSOR FAST_FORWARD for
select tick, [event], item from (
select start as tick, 'Add' as [event], item from sourceValues as adds
union all
select [end] as tick, 'Remove' as [event], item from sourceValues as removes
) as [events]
order by tick
set @lastTick = 1
open eventsCursor
fetch next from eventsCursor into @tick, @event, @item
while @@FETCH_STATUS = 0
BEGIN
if @tick != @lastTick
begin
set @concatList = ''
select @concatList = @concatList + case when len( @concatList ) > 0 then '-' else '' end + Item
from @currentItemsList
insert into #result ( Start, [End], Items ) values ( @lastTick, @tick, @concatList )
end
if @event = 'Add' insert into @currentItemsList ( Item ) values ( @item );
else if @event = 'Remove' delete top ( 1 ) from @currentItemsList where Item = @item;
set @lastTick = @tick;
fetch next from eventsCursor into @tick, @event, @item;
END
close eventsCursor
deallocate eventsCursor
select * from #result order by start
drop table #result
Using a cursor for this special case allows just one "pass" through the data, like a running totals problem. Itzik Ben-Gan has some great examples of this in his SQL 2005 books.
Thanks a lot for all the answers. For the moment I have found a way of doing it. Since I'm dealing with a data warehouse and I have a Time dimension, I can do a join with the Time dimension in the style "inner join DimTime t on t.date between f.start_date and f.end_date".
It's not very good from a performance point of view, but it seems to be working for me.
I'll give onupdatecascade's implementation a try, to see which suits me better.
This exactly emulates and solves the stated problem:
-- prepare problem, it can have many rows with overlapping ranges
declare @range table
(
Item char(1) primary key,
[Start] int,
[End] int
)
insert @range select 'A', 1, 5
insert @range select 'B', 3, 8
-- unroll the ranges into helper table
declare @usage table
(
Item char(1),
Number int
)
declare
@Start int,
@End int,
@Item char(1)
declare table_cur cursor local forward_only read_only for
select [Start], [End], Item from @range
open table_cur
fetch next from table_cur into @Start, @End, @Item
while @@fetch_status = 0
begin
with
Num(Pos) as -- generate numbers used
(
select cast(@Start as int)
union all
select cast(Pos + 1 as int) from Num where Pos < @End
)
insert
@usage
select
@Item,
Pos
from
Num
option (maxrecursion 0) -- just in case more than 100
fetch next from table_cur into @Start, @End, @Item
end
close table_cur
deallocate table_cur
-- compile overlaps
;
with
overlaps as
(
select
Number,
(
select
Item + '-'
from
@usage as i
where
o.Number = i.Number
for xml path('')
)
as Items
from
@usage as o
group by
Number
)
select
min(Number) as [Start],
max(Number) as [End],
left(Items, len(Items) - 1) as Items -- beautify
from
overlaps
group by
Items
Similar questions have already appeared here, and it seems like I'm doing the same as in other instructions, but it doesn't work. So I have:
Declare @Counter Int
Set @Counter = 1
while @Counter <= 1000
Begin
insert into Kiso_task_table ([Numbers],[Square_root])
values ( @Counter, Sqrt(@Counter));
Set @Counter = @Counter + 1;
CONTINUE;
End
SELECT TOP (1000) [Numbers],[Square_root]
FROM [Kiso_task].[dbo].[Kiso_task_table]
and it should give me the numbers from 1 to 1000 and their square roots, respectively - instead it produces "1" every time. Do you know what is wrong?
Your error is in the data type of the column that stores the square root; it must be of type float:
CREATE TABLE #Kiso_task_table
(
[Numbers] INT,
[Square_root] FLOAT
);
GO
DECLARE @Counter INT
SET @Counter = 1
WHILE @Counter <= 1000
BEGIN
INSERT INTO #Kiso_task_table ([Numbers],[Square_root]) VALUES ( @Counter, Sqrt(@Counter));
SET @Counter = @Counter + 1
CONTINUE
END
SELECT TOP (1000) [Numbers],[Square_root]
FROM #Kiso_task_table
SQRT (Transact-SQL)
Your approach is procedural thinking. Try to start thinking set-based. That means: no CURSOR, no WHILE, no loops, no "do this, then this, and finally this". Tell the engine what you want to get back and let the engine decide how to do it.
DECLARE @mockupTarget TABLE(Number INT, Square_root DECIMAL(12,8));
WITH TallyOnTheFly(Number) AS
(
SELECT TOP 1000 ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) FROM master..spt_values
)
INSERT INTO @mockupTarget(Number,Square_root)
SELECT Number,SQRT(Number)
FROM TallyOnTheFly;
SELECT * FROM @mockupTarget ORDER BY Number;
The tally CTE creates a volatile set of 1000 numbers, which is inserted in one single statement.
Btw
I tested your code against my mockup table and it was working just fine...
Could someone please advise on how to repeat the query if it returns no results? I am trying to draw a random person out of the DB using RAND, but only if that person was not drawn previously (that info is stored in the column "allready_drawn").
At this point, when the query comes upon a number that was drawn before, the second condition "is null" stops it from displaying a result.
I would need the query to re-run until it comes up with a number.
DECLARE @min INTEGER;
DECLARE @max INTEGER;
set @min = (select top 1 id from [dbo].[persons] where sector = 8 order by id ASC);
set @max = (select top 1 id from [dbo].[persons] where sector = 8 order by id DESC);
select
ordial,
name_surname
from [dbo].[persons]
where id = ROUND(((@max - @min) * RAND() + @min), 0) and allready_drawn is NULL
Any suggestion is appreciated and I would like to thank everyone in advance.
Just try this; removing the "id" filter means you only have to run it once:
select TOP 1
ordial,
name_surname
from [dbo].[persons]
where allready_drawn is NULL
ORDER BY NEWID()
@gbn: that's a correct solution, but it's possibly too expensive. For very large tables with dense keys, randomly picking a key value between the min and max and re-picking until you find a match is also fair, and cheaper than sorting the whole table.
Also, there's a bug in the original post: the min and max rows will be selected only half as often as the others, as each maps to a smaller interval. To fix it, generate a random number from @min to @max + 1 and truncate rather than round. That way you map the interval [N, N+1) to N, ensuring a fair chance for each N.
For this selection method, here's how to repeat until you find a match:
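For reference, it is ROUND's third argument that switches it from rounding to truncation, which is what makes the interval mapping fair:

```sql
-- ROUND(x, 0) rounds to the nearest integer; ROUND(x, 0, 1) truncates.
select round(3.7, 0)    as rounded,   -- 4.0
       round(3.7, 0, 1) as truncated; -- 3.0
```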
--drop table persons
go
create table persons(id int, ordial int, name_surname varchar(2000), sector int, allready_drawn bit)
insert into persons(id,ordial,name_surname,sector, allready_drawn)
values (1,1,'foo',8,null),(2,2,'foo2',8,null),(100,100,'foo100',8,null)
go
declare @min int = (select top 1 id from [dbo].[persons] where sector = 8 order by id ASC);
declare @max int = 1+ (select top 1 id from [dbo].[persons] where sector = 8 order by id DESC);
set nocount on
declare @results table(ordial int, name_surname varchar(2000))
declare @i int = 0
declare @selected bit = 0
while @selected = 0
begin
set @i += 1
insert into @results(ordial,name_surname)
select
ordial,
name_surname
from [dbo].[persons]
where id = ROUND(((@max - @min) * RAND() + @min), 0, 1) and allready_drawn is NULL
if @@ROWCOUNT > 0
begin
select *, @i tries from @results
set @selected = 1
end
end
I have two MSSQL2008 tables like this:
I have a problem with the unit conversion logic.
The result I expect is like this:
1598 cigar = 1ball, 5slop, 8box, 2pcs
52 pen = 2box, 12pcs
Basically, I'm trying to take a number (qty) from one table and convert (split) it into the units which I defined in the other table!
Note: rows and new data can be added to both tables (dynamic).
How can I get these results through a SQL stored procedure?
I totally misunderstood the question last time, so my previous answer is removed (you can see it in the edit history, but it's not relevant to this question)... However, I've come up with a solution that may solve your problem.
NOTE: one little thing about this solution: if you enter the value in the second table like this
+--------+-------+
| Item | qty |
+--------+-------+
| 'cigar'| 596 |
+--------+-------+
the result for this row will be
596cigar = 0ball, 5slop, 8box, 0pcs
Note that ball and pcs appear even though their value is 0. That can probably be fixed if you don't want to show those values, but I'll let you play with it.
So let's get back to the solution and code. The solution has two stored procedures. The first one is the main one, the one you execute; I call it sp_MainProcedureConvertMe. Here is the code for that procedure:
CREATE PROCEDURE sp_MainProcedureConvertMe
AS
DECLARE @srcTable TABLE(srcId INT IDENTITY(1, 1), srcItem VARCHAR(50), srcQty INT)
DECLARE @xTable TABLE(xId INT IDENTITY(1, 1), xVal1 VARCHAR(1000), xVal2 VARCHAR(1000))
DECLARE @maxId INT
DECLARE @start INT = 1
DECLARE @sItem VARCHAR(50)
DECLARE @sQty INT
DECLARE @val1 VARCHAR(1000)
DECLARE @val2 VARCHAR(1000)
INSERT INTO @srcTable (srcItem, srcQty)
SELECT item, qty
FROM t2
SELECT @maxId = (SELECT MAX(srcId) FROM @srcTable)
WHILE @start <= @maxId
BEGIN
SELECT @sItem = (SELECT srcItem FROM @srcTable WHERE srcId = @start)
SELECT @sQty = (SELECT srcQty FROM @srcTable WHERE srcId = @start)
SELECT @val1 = (CAST(@sQty AS VARCHAR) + @sItem)
EXECUTE sp_ConvertMeIntoUnit @sItem, @sQty, @val2 OUTPUT
INSERT INTO @xTable (xVal1, xVal2)
VALUES (@val1, @val2)
SELECT @start = (@start + 1)
CONTINUE
END
SELECT xVal1 + ' = ' + xVal2 FROM @xTable
GO
This stored procedure has two table variables. @srcTable is basically your second table, but instead of using your table's id it creates a new srcId, which goes from 1 to some number and is auto-incremented. This is done because of the while loop: to avoid any problems when some values have been deleted, we want to be sure there won't be any skipped numbers or anything like that.
There are a few more variables; some of them are used to make the while loop work, others to store data. I think it's not hard to figure out from the code what they are used for.
The while loop iterates through all rows of @srcTable, takes the values, processes them, and inserts them into @xTable, which basically holds the result.
In the while loop we execute the second stored procedure, whose task is to calculate how many units of something there are in a specific number of items. I call it sp_ConvertMeIntoUnit, and here is its code:
CREATE PROCEDURE sp_ConvertMeIntoUnit
@inItemName VARCHAR(50),
@inQty INT,
@myResult VARCHAR(5000) OUT
AS
DECLARE @rTable TABLE(rId INT IDENTITY(1, 1), rUnit VARCHAR(50), rQty INT)
DECLARE @yTable TABLE(yId INT IDENTITY(1, 1), yVal INT, yRest INT)
DECLARE @maxId INT
DECLARE @start INT = 1
DECLARE @quentity INT = @inQty
DECLARE @divider INT
DECLARE @quant INT
DECLARE @rest INT
DECLARE @result VARCHAR(5000)
INSERT INTO @rTable(rUnit, rQty)
SELECT unit, qty
FROM t1
WHERE item = @inItemName
ORDER BY qty DESC
SELECT @maxId = (SELECT MAX(rId) FROM @rTable)
WHILE @start <= @maxId
BEGIN
SELECT @divider = (SELECT rQty FROM @rTable WHERE rId = @start)
SELECT @quant = (@quentity / @divider)
SELECT @rest = (@quentity % @divider)
INSERT INTO @yTable(yVal, yRest)
VALUES (@quant, @rest)
SELECT @quentity = @rest
SELECT @start = (@start + 1)
CONTINUE
END
SELECT @result = COALESCE(@result + ', ', '') + CAST(y.yVal AS VARCHAR) + r.rUnit FROM @rTable AS r INNER JOIN @yTable AS y ON r.rId = y.yId
SELECT @myResult = @result
GO
This procedure has three parameters: it takes two parameters from the first procedure, and one is returned as the result (OUTPUT). The input parameters are Item and Quantity.
There are also two table variables. In @rTable we store values with rId, which is auto-incremented and will always go from 1 to some number, no matter what the Ids in the first table are. The other two values are inserted there from the first table, based on the @inItemName parameter which is sent from the first procedure. From your first table we use unit and quantity, and store them with rId into @rTable, ordered by qty from the biggest number to the lowest. This is the part of the code for that:
INSERT INTO @rTable(rUnit, rQty)
SELECT unit, qty
FROM t1
WHERE item = @inItemName
ORDER BY qty DESC
Then we go into the while loop, where we do some maths. Basically, we store into the variable @divider values from @rTable. In the first iteration we take the biggest value, calculate how many times it is contained in the number (the second parameter we pass from the first procedure is qty from your second table), and store that in @quant. We also calculate the modulo and store it in the variable @rest. This line:
SELECT @rest = (@quentity % @divider)
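To make the arithmetic concrete, here is the greedy divide-and-remainder breakdown for 1598 cigars, assuming the unit sizes ball=1000, slop=100, box=12, pcs=1 (inferred from the example output):

```sql
-- Worked example of the divide-and-remainder steps (unit sizes assumed):
select 1598 / 1000 as balls, 1598 % 1000 as rest1;  -- 1 ball, remainder 598
select  598 /  100 as slops,  598 %  100 as rest2;  -- 5 slop, remainder 98
select   98 /   12 as boxes,   98 %   12 as rest3;  -- 8 box,  remainder 2
select    2 /    1 as pcs;                          -- 2 pcs
```

Each step divides the remainder of the previous one by the next-largest unit, which is exactly what the loop does.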
After that we insert our values into @yTable. Before we end the iteration of the while loop, we assign the @quentity variable the value of @rest, because from then on we need to work just with the remainder, not with the whole quantity any more. In the second iteration we take the next number (the second greatest in our @rTable) and the procedure repeats itself.
When the while loop finishes, we create a string. This line here:
SELECT @result = COALESCE(@result + ', ', '') + CAST(y.yVal AS VARCHAR) + r.rUnit FROM @rTable AS r INNER JOIN @yTable AS y ON r.rId = y.yId
This is the line you want to change if you want to exclude results with 0 (I talked about them at the beginning of the answer).
And at the end we store the result into the output variable @myResult.
The result of this stored procedure will be a string like this:
+--------------------------+
| 1ball, 5slop, 8box, 2pcs |
+--------------------------+
Hope I didn't miss anything important. Basically, the only thing you should change here is the name of the tables and their columns (if they are different): in the first stored procedure, instead of t2 here
INSERT INTO...
SELECT item, qty
FROM t2
And in the second one, instead of t1 (and columns if needed) here:
INSERT INTO...
SELECT unit, qty
FROM t1
WHERE item = @inItemName
ORDER BY qty DESC
Hope I helped a little or gave you an idea of how this can be solved.
GL!
You seem to want string aggregation – something that does not have a simple instruction in Transact-SQL and is usually implemented using a correlated FOR XML subquery.
You have not provided names for your tables. For the purpose of the following example, the first table is called ItemDetails and the second one, Items:
SELECT
i.item,
i.qty,
details = (
SELECT
', ' + CAST(d.qty AS varchar(10)) + d.unit
FROM
dbo.ItemDetails AS d
WHERE
d.item = i.item
FOR XML
PATH (''), TYPE
).value('substring(./text()[1], 3)', 'nvarchar(max)')
FROM
dbo.Items AS i
;
For the input provided in the question, the above query would return the following output:
item qty details
----- ----------- ------------------------------
cigar 1598 1pcs, 1000ball, 12box, 100slop
pen 52 1pcs, 20box
You can further arrange the data into strings as per your requirement. I would recommend you do it in the calling application and use SQL only as your data source. However, if you must, you can do the concatenation in SQL as well.
Note that the above query assumes that the same unit does not appear more than once per item in ItemDetails. If it does and you want to aggregate qty values per unit before producing the detail line, you will need to change the query a little:
SELECT
i.item,
i.qty,
details = (
SELECT
', ' + CAST(SUM(d.qty) AS varchar(10)) + d.unit
FROM
dbo.ItemDetails AS d
WHERE
d.item = i.item
GROUP BY
d.unit
FOR XML
PATH (''), TYPE
).value('substring(./text()[1], 3)', 'nvarchar(max)')
FROM
dbo.Items AS i
;
I am using MS SQL Server 2005 at work to build a database. I have been told that most tables will hold 1,000,000 to 500,000,000 rows of data in the near future after it is built... I have not worked with datasets this large. Most of the time I don't even know what I should be considering to figure out what the best answer might be for ways to set up schema, queries, stuff.
So... I need to know the start and end dates for something, and a value that is associated with an ID during that time frame. So... we can set the table up two different ways:
create table xxx_test2 (id int identity(1,1), groupid int, dt datetime, i int)
create table xxx_test2 (id int identity(1,1), groupid int, start_dt datetime, end_dt datetime, i int)
Which is better? How do I define "better"? I filled the first table with about 100,000 rows of data, and it takes about 10-12 seconds to transform it into the format of the second table, depending on the query...
select y.groupid,
y.dt as [start],
z.dt as [end],
(case when z.dt is null then 1 else 0 end) as latest,
y.i
from #x as y
outer apply (select top 1 *
from #x as x
where x.groupid = y.groupid and
x.dt > y.dt
order by x.dt asc) as z
or
http://consultingblogs.emc.com/jamiethomson/archive/2005/01/10/t-sql-deriving-start-and-end-date-from-a-single-effective-date.aspx
Buuuuut... with the second table... to insert a new row, I have to go look and see if there is a previous row, and if so, update its end date. So... is it a question of performance when retrieving data vs. insert/update? It seems silly to store that end date twice, but maybe... not? What things should I be looking at?
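The "close the previous row, then insert" step for the second table could be sketched like this. The table and columns come from the question's second CREATE TABLE; treating a NULL end_dt as "currently open" is an assumption:

```sql
-- Hypothetical sketch: when a new value arrives for a group, close the
-- currently open row (end_dt is null is assumed to mark it) and insert
-- the new one.
declare @groupid int = 42, @i int = 7, @now datetime = getdate();

update xxx_test2
set end_dt = @now
where groupid = @groupid
  and end_dt is null;

insert into xxx_test2 (groupid, start_dt, end_dt, i)
values (@groupid, @now, null, @i);
```

That extra update on every insert is exactly the write-side cost being weighed against the cheaper reads.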
This is what I used to generate my fake data, if you want to play with it for some reason (if you change the maximum of the random number to something higher it will generate the fake stuff a lot faster):
declare @dt datetime
declare @i int
declare @id int
set @id = 1
declare @rowcount int
set @rowcount = 0
declare @numrows int
while (@rowcount<100000)
begin
set @i = 1
set @dt = getdate()
set @numrows = Cast(((5 + 1) - 1) *
Rand() + 1 As tinyint)
while @i<=@numrows
begin
insert into #x values (@id, dateadd(d,@i,@dt), @i)
set @i = @i + 1
end
set @rowcount = @rowcount + @numrows
set @id = @id + 1
print @rowcount
end
For your purposes, I think option 2 is the way to go for table design. This gives you flexibility, and will save you tons of work.
Having the effective date and end date will allow you to have a query that returns only currently effective data by having this in your where clause (GETDATE() on SQL Server):
where getdate() between effectivedate and enddate
You can also then use it to join with other tables in a time-sensitive way.
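For example, a point-in-time join might look like this; the facts table and its columns are made up for illustration, while xxx_test2 is the question's second design:

```sql
-- Hypothetical example: join each fact to the value that was effective
-- at the time of that fact.
select f.fact_id, f.fact_date, v.i
from dbo.facts as f
join dbo.xxx_test2 as v
  on f.groupid = v.groupid
 and f.fact_date between v.start_dt and v.end_dt;
```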
Provided you set up the key properly and provide the right indexes, performance (on this table at least) should not be a problem.
For anyone who can use the LEAD analytic function of SQL Server 2012 (or Oracle, DB2, ...), retrieving data from the first table (that uses only one date column) would be much quicker than without this feature:
select
groupid,
dt "start",
lead(dt) over (partition by groupid order by dt) "end",
case when lead(dt) over (partition by groupid order by dt) is null
then 1 else 0 end "latest",
i
from x
I have a table structure that contains an identifier column and a column that contains a delimited string. What I would like to achieve is to insert the delimited string into a new table as individual records, one for each of the values in the split delimited string.
My table structure for the source table is as follows:
CREATE TABLE tablea(personID VARCHAR(8), delimStr VARCHAR(100))
Some sample data:
INSERT INTO tablea (personID, delimStr) VALUES ('A001','Monday, Tuesday')
INSERT INTO tablea (personID, delimStr) VALUES ('A002','Monday, Tuesday, Wednesday')
INSERT INTO tablea (personID, delimStr) VALUES ('A003','Monday')
My destination table is as follows:
CREATE TABLE tableb(personID VARCHAR(8), dayName VARCHAR(10))
I am attempting to create a stored procedure to undertake the insert; my SP so far looks like:
CREATE PROCEDURE getTKWorkingDays
@pos integer = 1
, @previous_pos integer = 0
AS
BEGIN
DECLARE @value varchar(50)
, @string varchar(100)
, @ttk varchar(8)
WHILE @pos > 0
BEGIN
SELECT @ttk = personID
, @string = delimStr
FROM dbo.tablea
SET @pos = CHARINDEX(',', @string, @previous_pos + 1)
IF @pos > 0
BEGIN
SET @value = SUBSTRING(@string, @previous_pos + 1, @pos - @previous_pos - 1)
INSERT INTO dbo.tableb ( personID, dayName ) VALUES ( @ttk, @value )
SET @previous_pos = @pos
END
END
IF @previous_pos < LEN(@string)
BEGIN
SET @value = SUBSTRING(@string, @previous_pos + 1, LEN(@string))
INSERT INTO dbo.tableb ( tkinit, dayName ) VALUES ( @ttk, @value )
END
END
The data that was inserted (only 1 record out of the 170 or so in the original table, which after splitting the delimited string should result in about 600 or so records in the new table) was incorrect.
What I am expecting to see using the sample data above is:
personID dayName
A001 Monday
A001 Tuesday
A002 Monday
A002 Tuesday
A002 Wednesday
A003 Monday
Is anyone able to point out any resources or identify where I am going wrong, and how to make this query work?
The Database is MS SQL Server 2000.
I thank you in advance for any assistance you are able to provide.
Matt
Well, your SELECT statement which gets the "next" person doesn't have a WHERE clause, so I'm not sure how SQL Server would know to move to the next person. If this is a one-time task, why not use a cursor?
CREATE TABLE #n(n INT PRIMARY KEY);
INSERT #n(n) SELECT TOP 100 number FROM [master].dbo.spt_values
WHERE number > 0 GROUP BY number ORDER BY number;
DECLARE
@PersonID VARCHAR(8), @delimStr VARCHAR(100),
@str VARCHAR(100), @c CHAR(1);
DECLARE c CURSOR LOCAL FORWARD_ONLY STATIC READ_ONLY
FOR SELECT PersonID, delimStr FROM dbo.tablea;
OPEN c;
FETCH NEXT FROM c INTO @PersonID, @delimStr;
SET @c = ',';
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @delimStr = @c + @delimStr + @c;
-- INSERT dbo.tableb(tkinit, [dayName])
SELECT @PersonID, LTRIM(SUBSTRING(@delimStr, n+1, CHARINDEX(@c, @delimStr, n+1)-n-1))
FROM #n AS n
WHERE n.n <= LEN(@delimStr) - 1
AND SUBSTRING(@delimStr, n.n, 1) = @c;
FETCH NEXT FROM c INTO @PersonID, @delimStr;
END
CLOSE c;
DEALLOCATE c;
DROP TABLE #n;
If you create a permanent numbers table (with more than 100 rows, obviously) you can use it for many purposes. You could create a split function that allows you to do the above without a cursor (well, without an explicit cursor). But this would probably work best later, when you finally get off of SQL Server 2000. Newer versions of SQL Server have much more flexible and extensible ways of performing splitting and joining.