How to get a query result with a stored procedure (convert item quantity from one table into units defined in a second table) - sql

I have two MSSQL 2008 tables (the original post showed them as screenshots, which are not reproduced here; a reconstructed sketch follows this question).
I have a problem with the unit conversion logic.
The result I expect looks like this:
1598 cigar = 1ball, 5slop, 8box, 2pcs
52 pen = 2box, 12pcs
Basically I'm trying to take a number (qty) from one table and convert (split) it into the units defined in the other table!
Note: both tables are allowed to receive new rows and new data (dynamic).
How can I get these results through a SQL stored procedure?
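A rough reconstruction of what the two tables appear to contain, based on the column names and sample output used in the answers below (the names t1 and t2 and the unit sizes are assumptions, not taken from the original screenshots):
-- Assumed unit-definition table (t1) and quantity table (t2); reconstructed from the answers below.
CREATE TABLE t1 (item VARCHAR(50), unit VARCHAR(50), qty INT);
INSERT INTO t1 (item, unit, qty) VALUES
    ('cigar', 'ball', 1000), ('cigar', 'slop', 100), ('cigar', 'box', 12), ('cigar', 'pcs', 1),
    ('pen', 'box', 20), ('pen', 'pcs', 1);
CREATE TABLE t2 (item VARCHAR(50), qty INT);
INSERT INTO t2 (item, qty) VALUES ('cigar', 1598), ('pen', 52);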

I totally misunderstood the question last time, so the previous answer has been removed (you can see it in the edit history, but it's not relevant to this question)... However, I've come up with a solution that may solve your problem...
NOTE: one little thing about this solution: if you enter a value in the second table like this
+--------+-------+
| Item | qty |
+--------+-------+
| 'cigar'| 596 |
+--------+-------+
the result for that row will be
596cigar = 0ball, 5slop, 8box, 0pcs
Note that ball and pcs still appear even though their value is 0; that can probably be fixed if you don't want to show those values, but I'll let you play with it...
So let's get back to the solution and code. The solution has two stored procedures; the first one is the main one, and it's the one you execute. I call it sp_MainProcedureConvertMe. Here is the code for that procedure:
CREATE PROCEDURE sp_MainProcedureConvertMe
AS
DECLARE @srcTable TABLE(srcId INT IDENTITY(1, 1), srcItem VARCHAR(50), srcQty INT)
DECLARE @xTable TABLE(xId INT IDENTITY(1, 1), xVal1 VARCHAR(1000), xVal2 VARCHAR(1000))
DECLARE @maxId INT
DECLARE @start INT = 1
DECLARE @sItem VARCHAR(50)
DECLARE @sQty INT
DECLARE @val1 VARCHAR(1000)
DECLARE @val2 VARCHAR(1000)
-- copy the quantity table into a table variable with a gap-free id, so the loop can step through it safely
INSERT INTO @srcTable (srcItem, srcQty)
SELECT item, qty
FROM t2
SELECT @maxId = (SELECT MAX(srcId) FROM @srcTable)
WHILE @start <= @maxId
BEGIN
    SELECT @sItem = (SELECT srcItem FROM @srcTable WHERE srcId = @start)
    SELECT @sQty = (SELECT srcQty FROM @srcTable WHERE srcId = @start)
    SELECT @val1 = (CAST(@sQty AS VARCHAR) + @sItem)
    -- convert this row's quantity into units; the result comes back through the OUTPUT parameter
    EXECUTE sp_ConvertMeIntoUnit @sItem, @sQty, @val2 OUTPUT
    INSERT INTO @xTable (xVal1, xVal2)
    VALUES (@val1, @val2)
    SELECT @start = (@start + 1)
END
SELECT xVal1 + ' = ' + xVal2 FROM @xTable
GO
This stored procedure has two table variables. @srcTable is basically your second table, but instead of using your table's id it creates a new srcId that runs from 1 upward and auto-increments. That is done because of the while loop: we want to be sure there are no skipped numbers or gaps, which could otherwise appear when rows have been deleted from the source table.
There are a few more variables; some are used to make the while loop work, the others hold data. I think it's not hard to figure out from the code what they are used for...
The while loop iterates through all rows of @srcTable, takes the values, processes them and inserts them into @xTable, which basically holds the result.
In the while loop we execute the second stored procedure, whose task is to calculate how many units of something there are in a specific number of items. I call it sp_ConvertMeIntoUnit, and here is the code for it:
CREATE PROCEDURE sp_ConvertMeIntoUnit
@inItemName VARCHAR(50),
@inQty INT,
@myResult VARCHAR(5000) OUT
AS
DECLARE @rTable TABLE(rId INT IDENTITY(1, 1), rUnit VARCHAR(50), rQty INT)
DECLARE @yTable TABLE(yId INT IDENTITY(1, 1), yVal INT, yRest INT)
DECLARE @maxId INT
DECLARE @start INT = 1
DECLARE @quantity INT = @inQty
DECLARE @divider INT
DECLARE @quant INT
DECLARE @rest INT
DECLARE @result VARCHAR(5000)
-- load this item's unit definitions, biggest unit first
INSERT INTO @rTable(rUnit, rQty)
SELECT unit, qty
FROM t1
WHERE item = @inItemName
ORDER BY qty DESC
SELECT @maxId = (SELECT MAX(rId) FROM @rTable)
WHILE @start <= @maxId
BEGIN
    SELECT @divider = (SELECT rQty FROM @rTable WHERE rId = @start)
    SELECT @quant = (@quantity / @divider)  -- how many of this unit fit
    SELECT @rest = (@quantity % @divider)   -- remainder left for the smaller units
    INSERT INTO @yTable(yVal, yRest)
    VALUES (@quant, @rest)
    SELECT @quantity = @rest
    SELECT @start = (@start + 1)
END
SELECT @result = COALESCE(@result + ', ', '') + CAST(y.yVal AS VARCHAR) + r.rUnit FROM @rTable AS r INNER JOIN @yTable AS y ON r.rId = y.yId
SELECT @myResult = @result
GO
This procedure has three parameters: it takes two input parameters from the first procedure, and one is returned as the result (OUTPUT). The input parameters are the item and the quantity.
There are also two table variables. In @rTable we store values keyed by rId, which is auto-incremented and will always run from 1 upward no matter what ids exist in your first table. The other two columns are filled from your first table based on the @inItemName parameter that was sent from the first procedure... From your first table we take the unit and quantity and store them together with rId in @rTable, ordered by qty from the biggest number to the lowest. This is the part of the code that does that:
INSERT INTO @rTable(rUnit, rQty)
SELECT unit, qty
FROM t1
WHERE item = @inItemName
ORDER BY qty DESC
Then we go into the while loop where we do some maths. Basically we store values from @rTable in the variable @divider. In the first iteration we take the biggest value, calculate how many times it fits into the number (the second parameter passed from the first procedure, i.e. qty from your second table) and store that in @quant; then we also calculate the modulo and store it in the variable @rest. This line:
SELECT @rest = (@quantity % @divider)
After that we insert our values into @yTable. Before we end the iteration of the while loop we assign the value of @rest to @quantity, because from then on we only need to work with the remainder, not with the whole quantity any more. In the second iteration we take the next number (the second greatest in our @rTable) and the procedure repeats itself...
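To make the maths concrete, here is a quick worked example of that divide/modulo cascade, using the cigar unit sizes implied by the sample data elsewhere on this page (ball = 1000, slop = 100, box = 12, pcs = 1; these values are an assumption):
-- Worked example of the cascade for 1598 cigars
DECLARE @qty INT = 1598;
SELECT @qty / 1000 AS ball, @qty % 1000 AS remainder;  -- 1 ball, remainder 598
SELECT 598  / 100  AS slop, 598  % 100  AS remainder;  -- 5 slop, remainder 98
SELECT 98   / 12   AS box,  98   % 12   AS remainder;  -- 8 box,  remainder 2
SELECT 2    / 1    AS pcs,  2    % 1    AS remainder;  -- 2 pcs,  remainder 0
-- which is exactly the '1ball, 5slop, 8box, 2pcs' string built below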
When the while loop finishes we build the string. This line here:
SELECT @result = COALESCE(@result + ', ', '') + CAST(y.yVal AS VARCHAR) + r.rUnit FROM @rTable AS r INNER JOIN @yTable AS y ON r.rId = y.yId
This is the line you want to change if you want to exclude the results with 0 (the ones I talked about at the beginning of the answer)...
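For example, a minimal (untested) variation that skips the zero counts simply filters them out before the concatenation:
-- sketch only: drop units whose count is zero
SELECT @result = COALESCE(@result + ', ', '') + CAST(y.yVal AS VARCHAR) + r.rUnit
FROM @rTable AS r
INNER JOIN @yTable AS y ON r.rId = y.yId
WHERE y.yVal > 0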
And at the end we store the result in the output variable @myResult...
This stored procedure will return a string like this:
+--------------------------+
| 1ball, 5slop, 8box, 2pcs |
+--------------------------+
Hope I didn't miss anything important. Basically the only thing you should change here are the table names and their columns (if yours are different): in the first stored procedure, instead of t2 here
INSERT INTO...
SELECT item, qty
FROM t2
And in the second one, instead of t1 (and the columns if needed) here...
INSERT INTO...
SELECT unit, qty
FROM t1
WHERE item = @inItemName
ORDER BY qty DESC
Hope I helped a little or at least gave you an idea of how this can be solved...
GL!
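For completeness, once both procedures and the two tables exist, the whole thing is run like this (the expected output is reconstructed from the sample data in the question):
-- calls sp_ConvertMeIntoUnit internally, once per row of t2
EXECUTE sp_MainProcedureConvertMe;
-- expected single-column result set:
-- 1598cigar = 1ball, 5slop, 8box, 2pcs
-- 52pen = 2box, 12pcs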

You seem to want string aggregation – something that does not have a simple instruction in Transact-SQL and is usually implemented using a correlated FOR XML subquery.
You have not provided names for your tables. For the purpose of the following example, the first table is called ItemDetails and the second one, Items:
SELECT
i.item,
i.qty,
details = (
SELECT
', ' + CAST(d.qty AS varchar(10)) + d.unit
FROM
dbo.ItemDetails AS d
WHERE
d.item = i.item
FOR XML
PATH (''), TYPE
).value('substring(./text()[1], 3)', 'nvarchar(max)')
FROM
dbo.Items AS i
;
For the input provided in the question, the above query would return the following output:
item qty details
----- ----------- ------------------------------
cigar 1598 1pcs, 1000ball, 12box, 100slop
pen 52 1pcs, 20box
You can further arrange the data into strings as per your requirement. I would recommend you do it in the calling application and use SQL only as your data source. However, if you must, you can do the concatenation in SQL as well (a sketch follows the next query).
Note that the above query assumes that the same unit does not appear more than once per item in ItemDetails. If it does and you want to aggregate qty values per unit before producing the detail line, you will need to change the query a little:
SELECT
i.item,
i.qty,
details = (
SELECT
', ' + CAST(SUM(d.qty) AS varchar(10)) + d.unit
FROM
dbo.ItemDetails AS d
WHERE
d.item = i.item
GROUP BY
d.unit
FOR XML
PATH (''), TYPE
).value('substring(./text()[1], 3)', 'nvarchar(max)')
FROM
dbo.Items AS i
;
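As noted above, if the final '1598cigar = ...' strings really must be assembled in SQL rather than in the calling application, a minimal sketch wrapping the same FOR XML subquery could look like this (same assumed table names, untested):
SELECT
    CAST(i.qty AS varchar(10)) + i.item + ' = ' + (
        SELECT
            ', ' + CAST(d.qty AS varchar(10)) + d.unit
        FROM
            dbo.ItemDetails AS d
        WHERE
            d.item = i.item
        FOR XML
            PATH (''), TYPE
    ).value('substring(./text()[1], 3)', 'nvarchar(max)') AS line
FROM
    dbo.Items AS i
;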

Related

How to retrieve values from a temp table and set those values in local variables

I have this code:
DECLARE @TotalPayment DECIMAL(18,4)
DECLARE @GetTotalPaymentAmount AS TABLE
(
Amount DECIMAL(18,4),
CurrencyId CHAR(3)
)
INSERT INTO @GetTotalPaymentAmount
SELECT SUM(Amount), CurrencyId
FROM [dbo].[fn_DepositWithdrawReport]()
WHERE OperationTypeId = 2
GROUP BY CurrencyId
SET @TotalPayment = (SELECT Amount FROM @GetTotalPaymentAmount)
I am getting this error
Subquery returned more than 1 value.
So yes, I know the issue is in the SET logic, because @GetTotalPaymentAmount returns more than one row. If I use TOP 1, for example, it works great, but I need all the values of that table. How can I get all the values from that table and assign them to local variables?
I am getting a table like this
A  | C
---+-----
10 | USD
20 | EURO
and I need to retrieve all of these values.
Please note that I do not know how many rows will be returned from the temp table, so simply declaring a second variable won't work. The whole point is to eventually pass those values to a function as input parameters.
Here, I modified your code slightly. Should work :)
DECLARE @TotalPayment DECIMAL(18,4)
DECLARE @GetTotalPaymentAmount AS TABLE
(
Id int identity(1,1), --added Id column
Amount DECIMAL(18,4),
CurrencyId CHAR(3)
)
INSERT INTO @GetTotalPaymentAmount
SELECT SUM(Amount), CurrencyId
FROM [dbo].[fn_DepositWithdrawReport]()
WHERE OperationTypeId = 2
GROUP BY CurrencyId
declare @i int, @cnt int
set @i = 1
select @cnt = COUNT(*) from @GetTotalPaymentAmount
while @i <= @cnt
begin
    select @TotalPayment = Amount from @GetTotalPaymentAmount where Id = @i
    --do stuff with the retrieved value
    set @i += 1
end
@So_Op
If you want to use the data in a SCALAR function, then just do this:
SELECT
G.Amount
,dbo.FN_ScalarFunction(G.Amount)
FROM
@GetTotalPaymentAmount G
If it's a TABLE Function then this works
SELECT
G.Amount
,F.ReturnValue
FROM
@GetTotalPaymentAmount G
CROSS APPLY
dbo.FN_TableFunction(G.Amount) F
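For illustration only, a hypothetical inline table-valued function that would satisfy the CROSS APPLY pattern above might look like this (the name FN_TableFunction and the 10% calculation are placeholders, not part of the original question):
CREATE FUNCTION dbo.FN_TableFunction (@Amount DECIMAL(18,4))
RETURNS TABLE
AS
RETURN
(
    -- stand-in calculation; the real function would do whatever is needed per amount
    SELECT @Amount * 0.10 AS ReturnValue
);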

Repeat query if no results came up

Could someone please advise on how to repeat a query if it returned no results? I am trying to pick a random person out of the DB using RAND, but only if that number was not used previously (that info is stored in the column "allready_drawn").
At the moment, when the query hits a number that was drawn before, it does not display a result because of the second condition ("is null").
I need the query to re-run until it comes up with a number.
DECLARE @min INTEGER;
DECLARE @max INTEGER;
set @min = (select top 1 id from [dbo].[persons] where sector = 8 order by id ASC);
set @max = (select top 1 id from [dbo].[persons] where sector = 8 order by id DESC);
select
ordial,
name_surname
from [dbo].[persons]
where id = ROUND(((@max - @min) * RAND() + @min), 0) and allready_drawn is NULL
The results (two possible outcomes, shown as screenshots in the original post): either a matching row is returned, or nothing at all.
Any suggestion is appreciated and I would like to thank everyone in advance.
Just try this: remove the "id" filter, so you only have to run it once.
select TOP 1
ordial,
name_surname
from [dbo].[persons]
where allready_drawn is NULL
ORDER BY NEWID()
@gbn that's a correct solution, but it may be too expensive. For very large tables with dense keys, randomly picking a key value between the min and max and re-picking until you find a match is also fair, and cheaper than sorting the whole table.
Also, there's a bug in the original post: the min and max rows will be selected only half as often as the others, as each maps to a smaller interval. To fix it, generate a random number from @min to @max + 1 and truncate rather than round. That way you map each interval [N, N+1) to N, ensuring a fair chance for each N.
For this selection method, here's how to repeat until you find a match.
--drop table persons
go
create table persons(id int, ordial int, name_surname varchar(2000), sector int, allready_drawn bit)
insert into persons(id,ordial,name_surname,sector, allready_drawn)
values (1,1,'foo',8,null),(2,2,'foo2',8,null),(100,100,'foo100',8,null)
go
declare @min int = (select top 1 id from [dbo].[persons] where sector = 8 order by id ASC);
declare @max int = 1 + (select top 1 id from [dbo].[persons] where sector = 8 order by id DESC);
set nocount on
declare @results table(ordial int, name_surname varchar(2000))
declare @i int = 0
declare @selected bit = 0
while @selected = 0
begin
    set @i += 1
    insert into @results(ordial,name_surname)
    select
    ordial,
    name_surname
    from [dbo].[persons]
    where id = ROUND(((@max - @min) * RAND() + @min), 0, 1) and allready_drawn is NULL
    if @@ROWCOUNT > 0
    begin
        select *, @i tries from @results
        set @selected = 1
    end
end

Slow MSSQL stored procedure processing Excel files with only 30,000 rows

I have a web application with an interface that users can upload files on. The data from the Excel file is collected, concatenated and passed to
a stored procedure which processes it and returns data.
A brief explanation of the stored procedure:
The stored procedure takes the string, breaks it down using a delimiter and stores it in a temp table variable.
Another pass is then run through that temp table, where a count is done to find the exact match count and the approximate match count, comparing each string against a view which contains all the names to compare against, for each row in the first temp table.
An exact match count is where the exact string is found in the view, for example (Bobby Bolonski).
An approximate match is found using a Levenshtein distance database function with a frequency of 2.
The results (name, exact match count and approximate match count) are stored in the final temp table (@TEMP1).
A select statement is run on that last temp table to return all the data to the application.
My problem is that when I pass huge files, like an Excel file with 27,000 names, it takes about 2 hours to process and return data from the database.
I have checked both the server the application is on and the server the database is on.
On the application server, both memory and CPU usage are below 15%.
On the database server, both memory and CPU usage are also below 15%.
I'm looking for advice on what improvements I can make to speed this up.
Below is a copy of the stored procedure, as it is doing all the work and returning the results to the web application.
CREATE PROCEDURE [dbo].[FindMatch]
@fullname varchar(max), @frequency int,
@delimeter varchar(max) AS
set @frequency = 2
declare @transID bigint
SELECT @transID = ABS(CAST(CAST(NEWID() AS VARBINARY(5)) AS Bigint))
DECLARE @exactMatch int = 99
DECLARE @approximateMatch int = 99
declare @name varchar(50)
DECLARE @TEMP1 TABLE (fullname varchar(max),approxMatch varchar(max), exactmatch varchar(max))
DECLARE @ID varchar(max)
--declare a temp table
DECLARE @TEMP TABLE (ID int ,fullname varchar(max),approxMatch varchar(max), exactmatch varchar(max))
--split the input string and store the result in the @TEMP table
insert into @TEMP (ID,fullname) select * from fnSplitTest(@fullname, @delimeter)
--loop through the @TEMP table
WHILE EXISTS (SELECT ID FROM @TEMP)
BEGIN
SELECT Top 1 @ID = ID FROM @TEMP
select @name = fullname from @TEMP where id = @ID
--get the exact match count of the current row from the @TEMP table, and so on until the loop ends
select @exactMatch = count(1) from getalldata where replace(name,',','') COLLATE Latin1_general_CI_AI = @name COLLATE Latin1_general_CI_AI
--declare temp table @TEMP3
DECLARE @TEMP3 TABLE (name varchar(max))
--insert into @TEMP3 only the data that is similar to our search name, so we don't loop over all the data in the view
INSERT INTO @TEMP3(name)
select name from getalldata where SOUNDEX(name) LIKE SOUNDEX(@name)
--get the approximate count using the [DamLev] function.
--this function uses the Damerau-Levenshtein distance algorithm to calculate the distance between the search string
--and the names inserted into @TEMP3 above. Uses frequency 2 so as to eliminate all the others
select @approximateMatch = count(1) from @TEMP3 where
dbo.[DamLev](replace(name,',',''),@name,@frequency) <= @frequency and
dbo.[DamLev](replace(name,',',''),@name,@frequency) > 0 and name != @name
--at the end of every loop, insert the results into @TEMP1
insert into @TEMP1 (fullname,approxMatch, exactmatch) values(@name,@approximateMatch,@exactMatch)
insert into FileUploadNameInsert (name) values (@name + ' ' +cast(@approximateMatch as varchar) + ' ' + cast(@exactMatch as varchar) + ', ' + cast(@transID as varchar) )
DELETE FROM @TEMP WHERE ID= @ID
delete from @TEMP3
END
--Return all the data stored in @TEMP1
select fullname,exactmatch,approxMatch, @transID as transactionID from @TEMP1
GO
In my opinion:
Use OPENROWSET to read the records directly into a pre-defined, properly indexed table in your database.
Then perform your operations on that table at the back end using pre-defined stored procedures.
It should take around 15 minutes for 30,000 rows.
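For reference, a hedged sketch of that OPENROWSET idea (the file path, sheet name and staging table are placeholders; it also assumes the ACE OLE DB provider is installed and 'Ad Hoc Distributed Queries' is enabled on the server):
-- Bulk-load the spreadsheet into a staging table instead of passing one huge delimited string.
SELECT src.*
INTO dbo.UploadedNamesStaging
FROM OPENROWSET(
        'Microsoft.ACE.OLEDB.12.0',
        'Excel 12.0;Database=C:\Uploads\names.xlsx;HDR=YES',
        'SELECT * FROM [Sheet1$]') AS src;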

SQL server - Split and sum of a single cell

I have a table cell of type nvarchar(max) that typically looks like this:
A03 B32 Y660 P02
i.e. a letter followed by a number, with the pairs separated by spaces. What I want to do is get the sum of all those numbers in a SQL procedure. That's something rather simple in other languages, but I am fairly new to SQL, and besides, it seems to me like a rather clumsy language for playing around with strings.
Aaanyway, I imagine it would go like this:
1) Create a temporary table and fill it using a split function
2) Strip the first character of every cell
3) Convert the data to int
4) Update the target table column, setting it to the sum over said temporary table.
So I got as far as this:
CREATE PROCEDURE [dbo].[SumCell] @delimited nvarchar(max), @row int
AS
BEGIN
declare @t table(data nvarchar(max))
declare @xml xml
set @xml = N'<root><r>' + replace(@delimited,' ','</r><r>') + '</r></root>'
insert into @t(data)
select
r.value('.','varchar(5)') as item
from @xml.nodes('//root/r') as records(r)
UPDATE TargetTable
SET TargetCell = SUM(@t.data) WHERE id = @row
END
Obviously, the first-char stripping and conversion-to-int part is missing, and on top of that I get a "Must declare the scalar variable "@t"" error...
The question is not very clear, so assuming your text is in a single cell like 'A3 B32 Y660 P20', the following snippet can be used to get the sum.
DECLARE @Cell NVARCHAR(400), @Sum INT, @CharIndex INT
SELECT @Cell = 'A3 B32 Y660 P20', @Sum = 0
WHILE (LEN(LTRIM(@Cell))>0)
BEGIN
SELECT @CharIndex = CHARINDEX(' ',@Cell,0)
SELECT @Sum = @Sum +
SUBSTRING(@Cell,2,CASE WHEN @CharIndex>2 THEN @CharIndex-2 ELSE LEN(@Cell)-1 END )
SELECT @Cell = SUBSTRING(@Cell,@CharIndex+1,LEN(@Cell))
IF NOT (@CharIndex >0) BREAK;
END
--@Sum has the total of the cell numbers
SELECT @Sum
I'm making the assumption that you really want to be able to find the sum of the values in your delimited list for a full selection of a table. Therefore, I believe the most complicated part of your question is splitting the values. The method I tend to use requires a numbers table, so I'll start with that:
--If you really want to use a temporary numbers table don't use this method!
create table #numbers(
Number int identity(1,1) primary key
)
declare @counter int
set @counter = 1
while @counter<=10000
begin
insert into #numbers default values
set @counter = @counter + 1
end
I'll also create some test data
create table #data(
id int identity(1,1),
cell nvarchar(max)
)
insert into #data(cell) values('A03 B32 Y660 P02')
insert into #data(cell) values('Y72 A12 P220 B42')
Then, I'd put the split functionality into a CTE to keep things clean:
;with split as (
select d.id,
[valOrder] = row_number() over(partition by d.cell order by n.Number),
[fullVal] = substring(d.cell, n.Number, charindex(' ',d.cell+' ',n.Number) - n.Number),
[char] = substring(d.cell, n.Number, 1),
[numStr] = substring(d.cell, n.Number+1, charindex(' ',d.cell+' ',n.Number) - n.Number)
from #data d
join #numbers n on substring(' '+d.cell, n.Number, 1) = ' '
where n.Number <= len(d.cell)+1
)
select id, sum(cast(numStr as int))
from split
group by id
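For the two sample rows created above, that query should return something like the following (sums worked out by hand: 3 + 32 + 660 + 2 and 72 + 12 + 220 + 42):
id          (No column name)
----------- ----------------
1           697
2           346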

dynamic interval creation in SQL

I have the following problem that I would like to solve with Transact-SQL.
I have something like this
Start | End | Item
1 | 5 | A
3 | 8 | B
and I want to create something like
Start | End | Item-Combination
1 | 2 | A
3 | 5 | A-B
6 | 8 | B
For the Item-Combination concatenation I already thought of using the FOR XML statement. But in order to create the different new intervals... I really don't know how to approach it. Any idea?
Thanks.
I had a very similar problem with some computer usage data. I had session data indicating login/logout times. I wanted to find the times (hour of day per day of week) that were the most in demand, that is, the hours where the most users were logged in. I ended up solving the problem client-side using hash tables. For each session, I would increment the bucket for a particular location corresponding to the day of week and hour of day for each day/hour for which the session was active. After examining all sessions the hash table values show the number of logins during each hour for each day of the week.
I think you could do something similar, keeping track of each item seen for each start/end value. You could then reconstruct the table by collapsing adjacent entries that have the same item combination.
And, no, I could not think of a way to solve my problem with SQL either.
This is a fairly typical range-finding problem, with the concatenation thrown in. Not sure if the following fits exactly, but it's a starting point. (Cursors are usually best avoided except in the small set of cases where they are faster than set-based solutions, so before the cursor haters get on me please note I use a cursor here on purpose because this smells to me like a cursor-friendly problem -- I typically avoid them.)
So if I create data like this:
CREATE TABLE [dbo].[sourceValues](
[Start] [int] NOT NULL,
[End] [int] NOT NULL,
[Item] [varchar](100) NOT NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[sourceValues] WITH CHECK ADD CONSTRAINT [End_after_Start] CHECK (([End]>[Start]))
GO
ALTER TABLE [dbo].[sourceValues] CHECK CONSTRAINT [End_after_Start]
GO
declare @i int; set @i = 0;
declare @start int;
declare @end int;
declare @item varchar(100);
while @i < 1000
begin
set @start = ABS( CHECKSUM( newid () ) % 100 ) + 1 ; -- "random" int
set @end = @start + ( ABS( CHECKSUM( newid () ) % 10 ) ) + 2; -- bigger random int
set @item = char( ( ABS( CHECKSUM( newid() ) ) % 5 ) + 65 ); -- random letter A-E
print @start; print @end; print @item;
insert into sourceValues( Start, [End], Item) values ( @start , @end, @item );
set @i += 1;
end
Then I can treat the problem like this: each "Start" AND each "End" value represents a change in the collection of current Items, either adding one or removing one, at a certain time. In the code below I alias that notion as "event," meaning an Add or Remove. Each start or end is like a time, so I use the term "tick." If I make a collection of all the events, ordered by event time (Start AND End), I can iterate through it while keeping a running tally in an in-memory table of all the Items that are in play. Each time the tick value changes, I take a snapshot of that tally:
declare @tick int;
declare @lastTick int;
declare @event varchar(100);
declare @item varchar(100);
declare @concatList varchar(max);
declare @currentItemsList table ( Item varchar(100) );
create table #result ( Start int, [End] int, Items varchar(max) );
declare eventsCursor CURSOR FAST_FORWARD for
select tick, [event], item from (
select start as tick, 'Add' as [event], item from sourceValues as adds
union all
select [end] as tick, 'Remove' as [event], item from sourceValues as removes
) as [events]
order by tick
set @lastTick = 1
open eventsCursor
fetch next from eventsCursor into @tick, @event, @item
while @@FETCH_STATUS = 0
BEGIN
if @tick != @lastTick
begin
set @concatList = ''
select @concatList = @concatList + case when len( @concatList ) > 0 then '-' else '' end + Item
from @currentItemsList
insert into #result ( Start, [End], Items ) values ( @lastTick, @tick, @concatList )
end
if @event = 'Add' insert into @currentItemsList ( Item ) values ( @item );
else if @event = 'Remove' delete top ( 1 ) from @currentItemsList where Item = @item;
set @lastTick = @tick;
fetch next from eventsCursor into @tick, @event, @item;
END
close eventsCursor
deallocate eventsCursor
select * from #result order by start
drop table #result
Using a cursor for this special case allows just one "pass" through the data, like a running totals problem. Itzik Ben-Gan has some great examples of this in his SQL 2005 books.
Thanks a lot for all the answers; for the moment I have found a way of doing it. Since I'm dealing with a data warehouse and I have a Time dimension, I can do some joins with the Time dimension in the style "inner join DimTime t on t.date between f.start_date and end_date".
It's not very good from a performance point of view, but it seems to be working for me.
I'll give onupdatecascade's implementation a try, to see which suits me better.
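For what it's worth, a rough sketch of that DimTime-join idea (the fact-table and column names here are assumptions, since the poster's schema isn't shown):
-- Unroll each interval against the time dimension, one row per day per active item.
SELECT t.date, f.Item
FROM DimTime AS t
INNER JOIN FactIntervals AS f
    ON t.date BETWEEN f.start_date AND f.end_date;
-- Each date then carries the set of items active on it; the combinations can be
-- concatenated (e.g. with FOR XML PATH) and adjacent dates with the same combination
-- collapsed back into Start/End ranges.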
This exactly emulates and solves the problem as stated:
-- prepare problem, it can have many rows with overlapping ranges
declare @range table
(
Item char(1) primary key,
[Start] int,
[End] int
)
insert @range select 'A', 1, 5
insert @range select 'B', 3, 8
-- unroll the ranges into a helper table
declare @usage table
(
Item char(1),
Number int
)
declare
@Start int,
@End int,
@Item char(1)
declare table_cur cursor local forward_only read_only for
select [Start], [End], Item from @range
open table_cur
fetch next from table_cur into @Start, @End, @Item
while @@fetch_status = 0
begin
with
Num(Pos) as -- generate the numbers used
(
select cast(@Start as int)
union all
select cast(Pos + 1 as int) from Num where Pos < @End
)
insert
@usage
select
@Item,
Pos
from
Num
option (maxrecursion 0) -- just in case more than 100
fetch next from table_cur into @Start, @End, @Item
end
close table_cur
deallocate table_cur
-- compile overlaps
;
with
overlaps as
(
select
Number,
(
select
Item + '-'
from
@usage as i
where
o.Number = i.Number
for xml path('')
)
as Items
from
@usage as o
group by
Number
)
select
min(Number) as [Start],
max(Number) as [End],
left(Items, len(Items) - 1) as Items -- beautify
from
overlaps
group by
Items
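For the two sample ranges inserted at the top (A: 1-5, B: 3-8) this returns exactly the output asked for in the question:
Start  End  Items
-----  ---  -----
1      2    A
3      5    A-B
6      8    B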