I have two tables as shown here:
I need to insert some data with a stored procedure, using the code below:
ALTER PROCEDURE [dbo].[DeviceInvoiceInsert]
@dt AS DeviceInvoiceArray READONLY
AS
DECLARE @customerDeviceId BIGINT
DECLARE @customerId BIGINT
DECLARE @filterChangeDate DATE
BEGIN
SET @customerId = (SELECT TOP 1 CustomerId FROM @dt
WHERE CustomerId IS NOT NULL)
SET @filterChangeDate = (SELECT TOP 1 filterChangeDate FROM @dt)
INSERT INTO CustomerDevice (customerId, deviceId, deviceBuyDate, devicePrice)
SELECT customerId, deviceId, deviceBuyDate, devicePrice
FROM @dt
WHERE CustomerId IS NOT NULL
SET @customerDeviceId = SCOPE_IDENTITY()
INSERT INTO FilterChange (customerId, filterId, customerDeviceId, filterChangeDate)
SELECT @customerId, dt.filterId, @customerDeviceId, @filterChangeDate
FROM @dt AS dt
END
The problem is that when the procedure inserts data into the FilterChange table, @customerDeviceId always holds only the last IDENTITY value.
How can I fix this problem?
Update
Thanks to @T N for the answer, but his solution only inserts one filter per device; in my case there can be many filters per device.
As mentioned above, using the OUTPUT clause is the best way to capture inserted IDENTITY or other implicitly assigned values. However, you also need to correlate this data with other values from your source table. As far as I know, this cannot be done using a regular INSERT statement, which only allows you to capture data from the target table via the INSERTED pseudo-table.
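To illustrate the limitation, here is a minimal sketch (using the question's tables and the @dt parameter) of what a plain INSERT ... OUTPUT can and cannot capture:
-- Sketch only: a plain INSERT can OUTPUT the generated identity values ...
INSERT INTO CustomerDevice (customerId, deviceId, deviceBuyDate, devicePrice)
OUTPUT INSERTED.customerDeviceId, INSERTED.customerId   -- only INSERTED.* is visible here
SELECT customerId, deviceId, deviceBuyDate, devicePrice
FROM @dt
WHERE CustomerId IS NOT NULL;
-- ... but the OUTPUT list cannot reference @dt.filterId or @dt.filterChangeDate,
-- so the new identities cannot be correlated back to their source rows.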
I am assuming that none of the explicitly inserted values in the first target table can be used to reliably uniquely identify the source record.
A workaround is to use the MERGE statement to perform the insert. The OUTPUT clause may then be used to capture a combination of source and inserted target data.
ALTER PROCEDURE [dbo].[DeviceInvoiceInsert]
@dt AS DeviceInvoiceArray READONLY
AS
BEGIN
-- Temp table to receive captured data from output clause
DECLARE @FilterChangeData TABLE (
customerId INT,
filterId INT,
customerDeviceId INT,
filterChangeDate DATETIME2
)
-- Merge is used instead of a plain INSERT so that we can capture
-- a combination of source and inserted data
MERGE CustomerDevice AS TGT
USING (SELECT * FROM @dt WHERE CustomerId IS NOT NULL) AS SRC
ON 1 = 0 -- Never match
WHEN NOT MATCHED THEN
INSERT (customerId, deviceId, deviceBuyDate, devicePrice)
VALUES (SRC.customerId, SRC.deviceId, SRC.deviceBuyDate, SRC.devicePrice)
OUTPUT SRC.customerId, SRC.filterId, INSERTED.customerDeviceId, SRC.filterChangeDate
INTO @FilterChangeData
;
INSERT INTO FilterChange (customerId, filterId, customerDeviceId, filterChangeDate)
SELECT customerId, filterId, customerDeviceId, filterChangeDate
FROM @FilterChangeData
END
Given the following @dt source data:
|customerId|deviceId|deviceBuyDate|devicePrice|filterId|filterChangeDate|
|----------|--------|-------------|-----------|--------|----------------|
|11        |111     |2023-01-01   |111.1100   |1111    |2023-02-01      |
|22        |222     |2023-01-02   |222.2200   |2222    |2023-02-02      |
|33        |333     |2023-01-03   |333.3300   |3333    |2023-02-03      |
|11        |222     |2023-01-04   |333.3300   |1111    |2023-02-04      |
The following is inserted into CustomerDevice:
|customerDeviceId|customerId|deviceId|deviceBuyDate|devicePrice|
|----------------|----------|--------|-------------|-----------|
|1               |11        |111     |2023-01-01   |111.1100   |
|2               |22        |222     |2023-01-02   |222.2200   |
|3               |33        |333     |2023-01-03   |333.3300   |
|4               |11        |222     |2023-01-04   |333.3300   |
The following is inserted into FilterChange:
|customerId|filterId|customerDeviceId|filterChangeDate|
|----------|--------|----------------|----------------|
|11        |1111    |1               |2023-02-01      |
|22        |2222    |2               |2023-02-02      |
|33        |3333    |3               |2023-02-03      |
|11        |1111    |4               |2023-02-04      |
See this db<>fiddle.
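For completeness, a sketch of how the procedure might be invoked; the column list of the DeviceInvoiceArray table type is assumed here to match the @dt columns used above:
-- Sketch only: assumes DeviceInvoiceArray has exactly these columns
DECLARE @invoice AS DeviceInvoiceArray;

INSERT INTO @invoice (customerId, deviceId, deviceBuyDate, devicePrice, filterId, filterChangeDate)
VALUES (11, 111, '2023-01-01', 111.11, 1111, '2023-02-01'),
       (22, 222, '2023-01-02', 222.22, 2222, '2023-02-02');

EXEC dbo.DeviceInvoiceInsert @dt = @invoice;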
Thanks to @T N. His solution only inserts one filter per device, but in my case there can be many filters per device, so I modified the last part to solve the problem. The corrected @dt contents are shown in the script below.
As you can see, there are many filters per device. All rows belong to the same customer, because each invoice belongs to one customer, so I had to mark the repeated devices with a NULL customerId in order to group the filters per device.
Here is the corrected code, thanks to @T N:
-- Example showing MERGE (instead of INSERT) to capture a combination of
-- source and inserted data in an OUTPUT clause.
CREATE TABLE CustomerDevice (
customerDeviceId INT IDENTITY(1,1),
customerId INT,
deviceId INT,
deviceBuyDate DATE,
devicePrice NUMERIC(19,4)
)
CREATE TABLE FilterChange (
customerId INT,
filterId INT,
customerDeviceId INT,
filterChangeDate DATE
)
DECLARE @dt TABLE (
customerId INT,
deviceId INT,
deviceBuyDate DATE,
devicePrice NUMERIC(19,4),
filterId INT,
filterChangeDate DATE
)
INSERT @dt
VALUES
(3, 1, '2023-01-01', 111.11, 1, '2023-02-01'),
(NULL, 1, '2023-01-02', 222.22, 2, '2023-02-02'),
(NULL, 1, '2023-01-03', 333.33, 3, '2023-02-03'),
(NULL, 1, '2023-01-03', 333.33, 4, '2023-02-03'),
(3, 2, '2023-01-04', 333.33, 1, '2023-02-04'),
(NULL, 2, '2023-01-04', 333.33, 2, '2023-02-04'),
(NULL, 2, '2023-01-04', 333.33, 3, '2023-02-04'),
(NULL, 2, '2023-01-04', 333.33, 4, '2023-02-04')
-- Procedure body
DECLARE @customerId BIGINT
SET @customerId = (SELECT TOP 1 CustomerId FROM @dt WHERE CustomerId IS NOT NULL)
DECLARE @FilterChangeData TABLE (
customerId INT,
deviceId INT,
filterId INT,
customerDeviceId INT,
filterChangeDate DATETIME
)
MERGE CustomerDevice AS TGT
USING (SELECT * FROM @dt WHERE CustomerId IS NOT NULL) AS SRC
ON 1 = 0 -- Never match
WHEN NOT MATCHED THEN
INSERT (customerId, deviceId, deviceBuyDate, devicePrice)
VALUES (SRC.customerId, SRC.deviceId, SRC.deviceBuyDate, SRC.devicePrice)
OUTPUT SRC.customerId, SRC.deviceId, SRC.filterId, INSERTED.customerDeviceId, SRC.filterChangeDate
INTO @FilterChangeData;
INSERT INTO FilterChange (customerId, filterId, customerDeviceId, filterChangeDate)
SELECT @customerId, dt.filterId, fcd.customerDeviceId, dt.filterChangeDate
FROM @dt AS dt INNER JOIN @FilterChangeData AS fcd
ON fcd.deviceId = dt.deviceId
-- End procedure body
SELECT * FROM @dt
SELECT * FROM CustomerDevice
SELECT * FROM FilterChange
The results are shown in this db<>fiddle: https://dbfiddle.uk/yf7z_wqr
I have a lengthy SQL query that uses CTEs and multiple variables to produce a report of about 1500 customer records with many columns, based on a particular date, @ToDate. Some of the tables are ordered CTEs, so I only get the latest value based on @ToDate.
I've omitted specifics but the structure is as follows:
Declare @ToDate date .....
Declare @Category varchar ....;
with cte1 as (select * from table1 where table1.start_date <= @ToDate and (table1.end_date > @ToDate or table1.end_date is null))
,cte2 as (select * from table2 where table2.start_date <= @ToDate and (table2.end_date > @ToDate or table2.end_date is null))
select * from cte1
left join cte2 on cte2.id = cte1.id
where .....
which gives me the following results
|RunDate |CustomerID|DOB |Category|Col5 |Col6 |
|----------|----------|----------|--------|------|------|
|2021-08-30|11111 |2000-01-01|Cat1 | | |
|2021-08-30|22222 |2000-02-02|Cat2 | | |
I'd like to run the same script multiple times but with a different date: run with @ToDate = '2021-08-30', which gives me one set of results, and then for each of the past n Mondays, which would give me results like this...
|RunDate   |CustomerID|DOB       |Category|Col5  |Col6  |
|----------|----------|----------|--------|------|------|
|2021-08-30|11111 |2000-01-01|Cat1 | | |
|2021-08-30|22222 |2000-02-02|Cat2 | | |
|2021-08-23|11111 |2000-01-01|Cat1 | | |
|2021-08-23|22222 |2000-02-02|Cat2 | | |
|2021-08-23|33333 |2000-03-03|Cat9 | | |
I do have a calendar table available, so I can easily identify the past n Mondays (or any other day I like).
The only variable to change is @ToDate, as this is the run date, or "as at" date if you will. Essentially, I want to run it multiple times for the past few Mondays so I can see what the results were like at 30-08, 23-08, 16-08, etc.
I've never used loops, and research suggests I should avoid them or use them only as a last resort. I'm not sure of the best approach, and if I do use loops, how to wrap one around my query.
Thanks in advance
The question really needs a bit more elaboration, but I have taken a guess at what you are trying to do with this example.
I have created a Customers and an Orders table and then display the results for the date range.
I don't think you need to loop with cursors and such, as you can get the loop effect by just using the #DateRanges table and joining on it, whether it is a CTE or not.
Please let me know if this is not what you meant and I will remove the answer.
-- Setup a temp table to hold the dates I want to look for
IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects O WHERE O.xtype in ('U') AND O.id = object_id(N'tempdb..#DateRanges'))
BEGIN
PRINT 'Removing temp table #DateRanges'
DROP TABLE #DateRanges;
END
CREATE TABLE #DateRanges (
[Date] DATE
)
-- Add some dates
INSERT INTO #DateRanges ([Date])
VALUES ('2021-08-30'),
('2021-08-23'),
('2021-08-16')
-- Setup some customers
IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects O WHERE O.xtype in ('U') AND O.id = object_id(N'tempdb..#Customers'))
BEGIN
PRINT 'Removing temp table #Customers'
DROP TABLE #Customers;
END
CREATE TABLE #Customers (
CustomerId BIGINT IDENTITY(1,1) NOT NULL,
[Name] NVARCHAR(50),
DOB DATE NOT NULL,
CONSTRAINT PK_CustomerId PRIMARY KEY (CustomerId)
)
INSERT INTO #Customers ([Name], DOB)
VALUES('Bob', '1989-01-01'),
('Robert', '1994-01-01'),
('Andrew', '1992-01-01');
-- Setup some orders
IF EXISTS (SELECT * FROM tempdb.dbo.sysobjects O WHERE O.xtype in ('U') AND O.id = object_id(N'tempdb..#Order'))
BEGIN
PRINT 'Removing temp table #Order'
DROP TABLE #Order;
END
CREATE TABLE #Order (
OrderId BIGINT IDENTITY(1,1) NOT NULL,
CustomerId BIGINT NOT NULL,
CreatedDate DATE NOT NULL,
Category NVARCHAR(50) NOT NULL,
CONSTRAINT PK_OrderId PRIMARY KEY (OrderId)
)
INSERT INTO #Order(CustomerId, CreatedDate, Category)
VALUES
(1, '2021-08-30', 'Cat1'),
(1, '2021-08-23', 'Cat2'),
(2, '2021-08-30', 'Cat1'),
(2, '2021-08-23', 'Cat2'),
(2, '2021-08-16', 'Cat3'),
(3, '2021-08-30', 'Cat1'),
(3, '2021-08-16', 'Cat2')
-- Using the #DateRanges temp table we can get the data we need, so there is no need for a loop
SELECT *
FROM #DateRanges AS DR
LEFT JOIN #Order AS O ON O.CreatedDate <= DR.[Date] AND O.CreatedDate >= DATEADD(D, -6, DR.[Date])
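Applied back to the original query, the same idea looks roughly like this. This is only a sketch: table1/table2 are the question's placeholders, and dbo.Calendar stands in for whatever calendar table you have (use its own day-of-week column instead of DATENAME if it has one):
WITH RunDates AS (
    SELECT TOP (4) c.[Date] AS RunDate              -- e.g. the last 4 Mondays
    FROM dbo.Calendar AS c                          -- hypothetical calendar table
    WHERE c.[Date] <= GETDATE()
      AND DATENAME(WEEKDAY, c.[Date]) = 'Monday'    -- language-dependent; adjust to your calendar table
    ORDER BY c.[Date] DESC
),
cte1 AS (
    SELECT rd.RunDate, t1.*
    FROM RunDates AS rd
    JOIN table1 AS t1
      ON t1.start_date <= rd.RunDate
     AND (t1.end_date > rd.RunDate OR t1.end_date IS NULL)
),
cte2 AS (
    SELECT rd.RunDate, t2.*
    FROM RunDates AS rd
    JOIN table2 AS t2
      ON t2.start_date <= rd.RunDate
     AND (t2.end_date > rd.RunDate OR t2.end_date IS NULL)
)
SELECT *
FROM cte1
LEFT JOIN cte2
  ON cte2.id = cte1.id
 AND cte2.RunDate = cte1.RunDate;   -- keep each run date's rows together
Each RunDate effectively plays the role of @ToDate, so the report comes back once per Monday in a single result set.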
I have two tables:
A product table that has productID of all products (productName, productID)
A transaction table of records with columns: date (for last 3 months) and productID. All productID should have at least one record for each date.
However, there are missing productID records for some dates, and I would like to find the missing productIDs and the respective dates they are missing from.
Right now, I'm only able to find missing records for one particular date, which is:
Select productName, productID
from product a inner join transaction b
On a.productid=b.productid
Where a.productid not in
(
Select distinct productid
From transaction
Where date='2019-03-01'
)
I would like to have a table that consolidates all dates and their respective missing productid records. How can I proceed from here?
create table product
(ProductID INT,
ProductName Char (30)
);
insert into product values
(1, 'apple'),
(2, 'orange'),
(3, 'pear');
create table transaction1
(transDate Date,
productID INT,
revenue FLOAT);
insert into transaction1 values
('2019-03-01', 1, 2.5),
('2019-03-01', 2, 4.0),
('2019-03-01', 2, 8.0),
('2019-03-01', 3, 6.0),
('2019-03-02', 1, 7.0),
('2019-03-02', 3, 14.0),
('2019-03-03', 1, 1.5) ;
What I have will be similar to this, where productID = 2 is missing from 03-02. I am working with dates from the beginning of the year until yesterday, all of which are recorded in the transaction table.
@SimonPrice
The trickiest part of your problem is generating the list of all dates for your expected range, which is 90 days in your case. Here I have created all the dates using a loop. An additional column 'Common' with value 1 is added to both the product subquery and the temp table so that every product can be joined to every date.
Note: just change the value of @HowManyDay to however many days you want to check.
DECLARE @HowManyDay INT
SET @HowManyDay = 8
DECLARE @LoopCount INT
SET @LoopCount = 0
DECLARE @TempTable TABLE(Common INT, All_Date DATE)
WHILE @LoopCount <= @HowManyDay - 1
BEGIN
INSERT INTO @TempTable (Common, All_Date)
VALUES (1, DATEADD(DD, -@LoopCount, GETDATE()))
SET @LoopCount = @LoopCount + 1
END
SELECT B.All_Date,C.productID,C.productName,T.date
FROM @TempTable B
INNER JOIN
(
SELECT 1 [Common],*
FROM product
)C ON B.Common = C.Common
LEFT JOIN transaction_details T ON C.productID = T.productID AND B.All_Date = T.date
WHERE T.date IS NULL
ORDER BY 3 ASC,1 DESC
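If you would rather avoid the WHILE loop, the date list can also be built set-based with a recursive CTE; the rest of the query stays the same in spirit. A sketch, keeping the transaction_details/date names used above (adjust them to your real table, e.g. transaction1/transDate):
DECLARE @HowManyDay INT = 8;        -- or 90, etc.

WITH AllDates AS (
    SELECT CAST(GETDATE() AS DATE) AS All_Date
    UNION ALL
    SELECT DATEADD(DAY, -1, All_Date)
    FROM AllDates
    WHERE All_Date > DATEADD(DAY, -(@HowManyDay - 1), CAST(GETDATE() AS DATE))
)
SELECT d.All_Date, p.productID, p.productName
FROM AllDates AS d
CROSS JOIN product AS p
WHERE NOT EXISTS (
    SELECT 1
    FROM transaction_details AS t   -- table/column names as in the query above
    WHERE t.productID = p.productID
      AND t.[date] = d.All_Date
)
ORDER BY p.productID, d.All_Date DESC
OPTION (MAXRECURSION 366);          -- allow up to roughly a year of dates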
I have a table containing device movements.
MoveID DeviceID Start End
I want to find out if there is a way to sum up the total movement days for each device up to the present. However, if there is a gap of more than 6 weeks between an end date and the next start date, then the time count is reset.
MoveID DeviceID Start End
1 1 2011-1-1 2011-2-1
2 1 2011-9-1 2011-9-20
3 1 2011-9-25 2011-9-28
The total for the device should be 24 days, because there is a gap of greater than 6 weeks. I'd also like to find out the number of days since the first movement in the group, in this case 28 days, as the latest count group started on 2011-9-1.
I thought I could do it with a stored proc and a cursor, etc. (which is not good); I just wondered if there was anything better?
Thanks
Graeme
create table #test
(
MoveID int,
DeviceID int,
Start date,
End_time date
)
--drop table #test
insert into #test values
(1,1,'2011-1-1','2011-2-1'),
(2,1,'2011-9-1','2011-9-20'),
(3,1,'2011-9-25','2011-9-28')
select
a.DeviceID,
sum(case when datediff(dd, a.End_time, isnull(b.Start, a.end_time)) > 42 /*6 weeks = 42 days*/ then 0 else datediff(dd,a.Start, a.End_time)+1 /*we will count also the last day*/ end) as movement_days,
sum(case when datediff(dd, a.End_time, isnull(b.Start, a.end_time)) > 42 /*6 weeks = 42 days*/ then 0 else datediff(dd,a.Start, a.End_time)+1 /*we will count also the last day*/ end + case when b.MoveID is null then datediff(dd, a.Start, a.End_time) + 1 else 0 end) as total_days
from
#test a
left join #test b
on a.DeviceID = b.DeviceID
and a.MoveID + 1 = b.MoveID
group by
a.DeviceID
Let me know if you need some explanation - there can be more ways to do that...
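One more way, for what it's worth: on SQL Server 2012 or later the pairing with the next row can be done with LEAD instead of the self-join on MoveID + 1, so it does not depend on the MoveID values being consecutive. A sketch against the same #test table (first measure only):
;WITH NextMove AS (
    SELECT DeviceID,
           Start,
           End_time,
           LEAD(Start) OVER (PARTITION BY DeviceID ORDER BY Start) AS NextStart
    FROM #test
)
SELECT DeviceID,
       -- count a movement only when the gap to the next start is 42 days or less
       -- (the last movement has no next start, so ISNULL makes its gap 0)
       SUM(CASE WHEN DATEDIFF(dd, End_time, ISNULL(NextStart, End_time)) > 42
                THEN 0
                ELSE DATEDIFF(dd, Start, End_time) + 1   -- count the last day as well
           END) AS movement_days
FROM NextMove
GROUP BY DeviceID;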
DECLARE @Times TABLE
(
MoveID INT,
DeviceID INT,
Start DATETIME,
[End] DATETIME
)
INSERT INTO @Times VALUES (1, 1, '1/1/2011', '2/1/2011')
INSERT INTO @Times VALUES (2, 1, '9/1/2011', '9/20/2011')
INSERT INTO @Times VALUES (3, 1, '9/25/2011', '9/28/2011')
INSERT INTO @Times VALUES (4, 2, '1/1/2011', '2/1/2011')
INSERT INTO @Times VALUES (5, 2, '3/1/2011', '4/20/2011')
INSERT INTO @Times VALUES (6, 2, '5/1/2011', '6/20/2011')
DECLARE @MaxGapInWeeks INT
SET @MaxGapInWeeks = 6
SELECT
validTimes.DeviceID,
SUM(DATEDIFF(DAY, validTimes.Start, validTimes.[End]) + 1) AS TotalDays,
DATEDIFF(DAY, MIN(validTimes.Start), MAX(validTimes.[End])) + 1 AS TotalDaysInGroup
FROM
@Times validTimes LEFT JOIN
@Times timeGap
ON timeGap.DeviceID = validTimes.DeviceID
AND timeGap.MoveID <> validTimes.MoveID
AND DATEDIFF(WEEK, validTimes.[End], timeGap.Start) > @MaxGapInWeeks
WHERE timeGap.MoveID IS NULL
GROUP BY validTimes.DeviceID
I have a table which contains an ID and a Date for an event. Each row is for one date. I am trying to determine consecutive date ranges and consolidate output to show the ID,StartDate,EndDate
ID Date
200236 2011-01-02 00:00:00.000
200236 2011-01-03 00:00:00.000
200236 2011-01-05 00:00:00.000
200236 2011-01-06 00:00:00.000
200236 2011-01-07 00:00:00.000
200236 2011-01-08 00:00:00.000
200236 2011-01-09 00:00:00.000
200236 2011-01-10 00:00:00.000
200236 2011-01-11 00:00:00.000
200236 2011-01-12 00:00:00.000
200236 2011-01-13 00:00:00.000
200236 2011-01-15 00:00:00.000
200236 2011-01-16 00:00:00.000
200236 2011-01-17 00:00:00.000
Output would look like:
ID StartDate EndDate
200236 2011-01-02 2011-01-03
200236 2011-01-05 2011-01-13
200236 2011-01-15 2011-01-17
Any thoughts on how to handle this in SQL Server 2000?
SELECT ...
FROM ...
WHERE date_column BETWEEN '2011-01-02' AND '2011-01-15'
perhaps? Reference
Or you can do a sub-query and link the next record using a MAX where date is <= current date:
SELECT t.id, t.date,
       (SELECT MAX(t2.date) FROM mytable t2 WHERE t2.date <= t.date) AS nextDate
FROM mytable t
Or use:
SELECT TOP 1 t2.date
FROM mytable t2
WHERE t2.date <= t.date AND t2.id <> t.id
ORDER BY t2.date
Use that as the sub-query (with t as the alias of the outer query's table) so it grabs the next date in line after the current record.
I've just done this similar thing in SQL Server 2008. I think the following translation will work for SQL Server 2000:
-- Create table variable
DECLARE @StartTable TABLE
(
rowid INT IDENTITY(1,1) NOT NULL,
userid int,
startDate date
)
Insert Into @StartTable(userid, startDate)
--This finds the start dates by finding unmatched values
SELECT t1.ID, t1.[Date]
FROM Example As t1
LEFT OUTER JOIN Example As t2 ON t1.ID=t2.ID
And DateAdd(day, 1, t2.[Date]) = t1.[Date]
WHERE t2.[Date] Is NULL
ORDER BY t1.ID, t1.[Date]
-- Create table variable
DECLARE @EndTable TABLE
(
rowid INT IDENTITY(1,1) NOT NULL,
userid int,
endDate date
)
Insert Into @EndTable(userid, endDate)
--This finds the end dates by getting unmatched values
SELECT t1.ID, t1.[Date]
FROM Example As t1
LEFT OUTER JOIN Example As t2 ON t1.ID=t2.ID
And DateAdd(day, -1, t2.[Date]) = t1.[Date]
WHERE t2.[Date] IS NULL
ORDER BY t1.ID, t1.[Date]
Select eT.userid, startDate, endDate
From @EndTable eT
INNER JOIN @StartTable sT On eT.userid = sT.userid
AND eT.rowid = sT.rowid;
So as you can see, I created two table variables, one for starts and one for ends, by self-joining the table on the date either just prior to or just after the date in the [Date] column. This means that I'm selecting only records that don't have a date prior (so these would be at the beginning of a period) for the Start Table and those that have no date following (so these would be at the end of a period) for the End Table.
When these are inserted into the table variable, they are numbered in sequence because of the Identity column. Then I join the two table variables together. Because they are ordered, the start and end dates should always match up properly.
This solution works for me because I have at most one record per ID per day and I am only interested in days, not hours, etc. Even though it is several steps, I like it because it is conceptually simple and eliminates matched records without having cursors or loops. I hope it will work for you too.
This SO Question might help you. I linked directly to Rob Farley's answer as I feel this is a similar problem.
One approach you can take is to add a field that indicates the next date in the sequence. (Either add it to your current table or use a temporary table, store the underlying data to the temp table and then update the next date in the sequence).
Your starting data structure would look something like this:
ID, PerfDate, NextDate
200236, 2011-01-02, 2011-01-03
200236, 2011-01-03, 2011-01-04
etc.
You can then use a series of correlated subqueries to roll the data up into the desired output:
SELECT ID, StartDate, EndDate
FROM (
SELECT DISTINCT ID, PerfDate AS StartDate,
(SELECT MIN([PerfDate]) FROM [SourceTable] S3
WHERE S3.ID = S1.ID
AND S3.NextDate > S1.PerfDate
AND ISNULL(
(SELECT MIN(PerfDate)
FROM [SourceTable] AS S4
WHERE S4.ID = S1.ID
AND S4.NextDate > S3.NextDate), S3.NextDate + 1) > S3.NextDate) AS EndDate
FROM [SourceTable] S1
WHERE
ISNULL(
(SELECT MAX(NextDate)
FROM [SourceTable] S2
WHERE S2.ID = S1.ID
AND S2.PerfDate < S1.PerfDate), PerfDate - 1) < S1.PerfDate) q
ORDER BY q.ID, q.StartDate
This is the way I've done it in the past. It's a two step process:
Build the set of candidate contiguous periods
If there are any overlapping periods, delete all but the longest such period.
Here's a script that shows how it's done. You might be able to pull it off in a single [big, ugly] query, but trying to do that makes my head hurt. I'm using temp tables as that makes the debugging a whole lot easier.
drop table #source
create table #source
(
id int not null ,
dtCol datetime not null ,
-----------------------------------------------------------------------
-- ASSUMPTION 1: Each date must be unique for a given ID value.
-----------------------------------------------------------------------
unique clustered ( id , dtCol ) ,
-----------------------------------------------------------------------
-- ASSUMPTION 2: The datetime column only represents a day.
-- The value of the time component is always 00:00:00.000
-----------------------------------------------------------------------
check ( dtCol = convert(datetime,convert(varchar,dtCol,112),112) ) ,
)
go
insert #source values(1,'jan 1, 2011')
insert #source values(1,'jan 4, 2011')
insert #source values(1,'jan 5, 2011')
insert #source values(2,'jan 1, 2011')
insert #source values(2,'jan 2, 2011')
insert #source values(2,'jan 3, 2011')
insert #source values(2,'jan 5, 2011')
insert #source values(3,'jan 1, 2011')
insert #source values(4,'jan 1, 2011')
insert #source values(4,'jan 2, 2011')
insert #source values(4,'jan 3, 2011')
insert #source values(4,'jan 4, 2011')
go
insert #source values( 200236 , '2011-01-02')
insert #source values( 200236 , '2011-01-03')
insert #source values( 200236 , '2011-01-05')
insert #source values( 200236 , '2011-01-06')
insert #source values( 200236 , '2011-01-07')
insert #source values( 200236 , '2011-01-08')
insert #source values( 200236 , '2011-01-09')
insert #source values( 200236 , '2011-01-10')
insert #source values( 200236 , '2011-01-11')
insert #source values( 200236 , '2011-01-12')
insert #source values( 200236 , '2011-01-13')
insert #source values( 200236 , '2011-01-15')
insert #source values( 200236 , '2011-01-16')
insert #source values( 200236 , '2011-01-17')
go
drop table #candidate_range
go
create table #candidate_range
(
rowId int not null identity(1,1) ,
id int not null ,
dtFrom datetime not null ,
dtThru datetime not null ,
length as 1+datediff(day,dtFrom,dtThru) ,
primary key nonclustered ( rowID ) ,
unique clustered (id,dtFrom,dtThru) ,
)
go
--
-- seed the candidate range table with the set of all possible contiguous ranges for each id
--
insert #candidate_range ( id , dtFrom , dtThru )
select id = tFrom.id ,
valFrom = tFrom.dtCol ,
valThru = tThru.dtCol
from #source tFrom
join #source tThru on tThru.id = tFrom.id
and tThru.dtCol >= tFrom.dtCol
where 1+datediff(day,tFrom.dtCol,tThru.dtCol) = ( select count(*)
from #source t
where t.id = tFrom.id
and t.dtCol between tFrom.dtCol and tThru.dtCol
)
order by 1,2,3
go
--
-- compare the table to itself. If we find overlapping periods,
-- we'll keep the longest such period and delete the shorter overlapping periods.
--
delete t2
from #candidate_range t1
join #candidate_range t2 on t2.id = t1.id
and t2.rowId != t1.rowID
and t2.length < t1.length
and t2.dtFrom <= t1.dtThru
and t2.dtThru >= t1.dtFrom
go
That's about all there is to it.
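A closing note for readers who are not stuck on SQL Server 2000: on 2005 and later this is the classic gaps-and-islands pattern and collapses into a single query by grouping on the difference between each date and its ROW_NUMBER (a sketch against the #source table above):
-- Sketch (SQL Server 2005+): "date minus row number" is constant within
-- each run of consecutive days, so it can serve as a grouping key
SELECT id,
       MIN(dtCol) AS StartDate,
       MAX(dtCol) AS EndDate
FROM ( SELECT id,
              dtCol,
              DATEADD(day, -ROW_NUMBER() OVER (PARTITION BY id ORDER BY dtCol), dtCol) AS grp
       FROM #source ) AS s
GROUP BY id, grp
ORDER BY id, StartDate;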