I edited my question as it seems like people misunderstood what I wanted.
I have a table which has the following columns:
Company
Transaction ID
Transaction Date
The result I want is:
| COMPANY   | Transaction ID | Transaction Date | GROUP |
|-----------|----------------|------------------|-------|
| Company A | t_0001         | 01-01-2014       | 1     |
| Company A | t_0002         | 02-01-2014       | 1     |
| Company A | t_0003         | 04-01-2014       | 1     |
| Company A | t_0003         | 10-01-2014       | 2     |
| Company B | t_0004         | 02-01-2014       | 1     |
| Company B | t_0005         | 02-01-2014       | 1     |
| Company C | t_0006         | 03-01-2014       | 1     |
| Company C | t_0007         | 05-01-2014       | 2     |
where the transactions are first grouped by company. Within each company, the transactions are sorted from earliest to latest. The transactions are then checked, row by row, to see whether the previous transaction was performed less than 3 days earlier, in a moving-window fashion.
For example, t_0002 and t_0001 are less than 3 days apart, so they fall under group 1. t_0003 and t_0002 are less than 3 days apart, so t_0003 also falls under group 1, even though t_0003 and t_0001 are 3 or more days apart.
I figured the way to go about this is to group the data by company first, followed by sorting the transactions by date, but I got stuck after that. What methods could I use to produce this result? Any help would be appreciated.
P.S. I am using SQL Server 2014.
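For reference, a minimal sketch of the usual gaps-and-islands approach on SQL Server 2012+, assuming the data sits in a temp table called #Table1 with the columns above (the table name is only for illustration): flag each row that starts a new group with LAG, then turn the flags into group numbers with a running SUM.
-- Sketch: flag rows that start a new group (3 or more days after the
-- previous transaction for the same company), then number the groups
-- with a running SUM of the flags.
;with flagged as (
    select *,
           case when datediff(day,
                              lag([Transaction Date]) over (partition by Company order by [Transaction Date]),
                              [Transaction Date]) >= 3
                then 1 else 0 end as new_grp
    from #Table1
)
select Company, [Transaction ID], [Transaction Date],
       1 + sum(new_grp) over (partition by Company order by [Transaction Date]
                              rows unbounded preceding) as [GROUP]
from flagged
order by Company, [Transaction Date];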
I have determined the day difference within each company, ordered by transaction ID. If the difference is less than 3 days, the row goes to group 1; otherwise it goes to group 2. Alter the LAG clause to match your requirement and use it.
select *,
       isnull(case when datediff(day,
                                 lag([Transaction Date]) over (partition by company order by [transaction id]),
                                 [Transaction Date]) >= 3
                   then 2
              end, 1) as group1
from #Table1
If you don't care about the numbering in groups, use
select *,
dense_rank() over(partition by company order by transaction_date) -
(select count(distinct transaction_date) from t
where t1.company=company
and datediff(dd,transaction_date,t1.transaction_date) between 1 and 2) grp
from t t1
order by 1,3
If continuous numbers are needed for groups, use
select company,transaction_id,transaction_date,
dense_rank() over(partition by company order by grp) grp
from (select *, dense_rank() over(partition by company order by transaction_date) -
(select count(distinct transaction_date) from t
where t1.company=company
and datediff(dd,transaction_date,t1.transaction_date) between 1 and 2) grp
from t t1
) x
order by 1,3
create table xn (
[Company] char(1),
[Transaction ID] char(6),
[Transaction Date] date,
primary key ([Company], [Transaction ID], [Transaction Date])
);
insert into xn values
('A', 't_0001', '2014-01-01'),
('A', 't_0002', '2014-01-02'),
('A', 't_0003', '2014-01-04'),
('A', 't_0003', '2014-01-10'),
('B', 't_0004', '2014-01-02'),
('B', 't_0005', '2014-01-02'),
('C', 't_0006', '2014-01-03'),
('C', 't_0007', '2014-01-05');
Each query builds on the one before. There are more concise ways to write queries like this, but I think this way helps when you're learning window functions like lag(...) over (...).
The first one here brings the previous transaction date into the "current" row.
select
[Company],
[Transaction ID],
[Transaction Date],
lag ([Transaction Date]) over (partition by [Company] order by [Transaction Date]) as [Prev Transaction Date]
from xn
This query determines the number of days between the "current" transaction date and the previous transaction date.
select
[Company],
[Transaction ID],
[Transaction Date],
[Prev Transaction Date],
DateDiff(d, [Prev Transaction Date], [Transaction Date]) as [Days Between]
from (select
[Company],
[Transaction ID],
[Transaction Date],
lag ([Transaction Date]) over (partition by [Company] order by [Transaction Date]) as [Prev Transaction Date]
from xn) x
This does the grouping based on the number of days.
select
[Company],
[Transaction ID],
[Transaction Date],
case when [Days Between] between 0 and 3 then 1
     when [Days Between] is null then 1
     when [Days Between] > 3 then 2
     else null -- unreachable here; a string literal in this branch would break the int-typed CASE
end as [Group Num]
from (
select
[Company],
[Transaction ID],
[Transaction Date],
[Prev Transaction Date],
DateDiff(d, [Prev Transaction Date], [Transaction Date]) as [Days Between]
from (select
[Company],
[Transaction ID],
[Transaction Date],
lag ([Transaction Date]) over (partition by [Company] order by [Transaction Date]) as [Prev Transaction Date]
from xn) x
) y;
I need to select distinct values for only one column, while the other columns show the latest record, i.e.:
customerID Order Number Order Date
00001 1000011 2017-01-01
00001 1000022 2017-01-10
00001 1000033 2017-02-01
00002 2000011 2016-12-01
00002 2000022 2017-01-01
00003 3000011 2017-03-01
I would need this to show as:
customerID Order Number Order Date
00001 1000033 2017-02-01
00002 2000022 2017-01-01
00003 3000011 2017-03-01
In Postgresql I would have used SELECT DISTINCT ON (customerID) then ordered by Order Date desc but this isn't possible in SQL Server.
I have tried using the Max function on Order Date, but this still returns duplicates in Customer ID when applied like below:
SELECT DISTINCT [CustomerID], [Order No], Max([Order Date])
FROM [T.ORDERS]
GROUP BY [CustomerID], [JOBNO]
You can use JOIN too
SELECT
    A.[CustomerID], A.[Order No], A.[Order Date]
FROM [T.ORDERS] A
INNER JOIN
(
    SELECT
        [CustomerID], MAX([Order Date]) AS [Order Date]
    FROM [T.ORDERS]
    GROUP BY [CustomerID]
) B
    ON A.[CustomerID] = B.[CustomerID] AND A.[Order Date] = B.[Order Date]
You can use row_number as below:
select * from
( Select *, RowN = Row_Number() over (partition by CustomerID order by [Order date] desc) from #yourtable ) a
where a.RowN = 1
You may use this query:
SELECT customerID, MAX(OrderNumber) AS OrderNumber, MAX(OrderDate) AS OrderDate
FROM [T.ORDERS]
GROUP BY customerID;
Note that MAX is applied to each column independently here, so the order number is not guaranteed to come from the latest order.
You may try TOP 1 WITH TIES with ROW_NUMBER:
with Data as (
select '00001' customerID, 1000011 orderNumber, cast('20170101' as date) orderdate union all
select '00001' customerID, 1000022 orderNumber, '20170110' orderdate union all
select '00001' customerID, 1000033 orderNumber, '20170201' orderdate union all
select '00002' customerID, 2000011 orderNumber, '20161201' orderdate union all
select '00002' customerID, 2000022 orderNumber, '20170101' orderdate union all
select '00003' customerID, 3000011 orderNumber, '20170301' orderdate)
select top 1 with ties
CustomerID,
OrderNumber,
OrderDate
from Data
order by
ROW_NUMBER() OVER (partition by CustomerID order by orderdate desc)
result:
CustomerID orderNumber orderdate
00001 1000033 2017-02-01
00002 2000022 2017-01-01
00003 3000011 2017-03-01
If your table is not huge, you could try something like this:
SELECT [CustomerID],
       SUBSTRING(Dummy, 1, CHARINDEX('*', Dummy) - 1) AS [Order Date],
       SUBSTRING(Dummy, CHARINDEX('*', Dummy) + 1, LEN(Dummy)) AS [Order No]
FROM (
    SELECT [CustomerID],
           MAX(CONVERT(varchar(8), [Order Date], 112) + '*' + CAST([Order No] AS varchar(20))) AS Dummy
    FROM [T.ORDERS]
    GROUP BY [CustomerID]
) AS x
What it does is join the Order Date and Order No fields with a * character (which hopefully doesn't occur anywhere in either column's data) and then pick the maximum value within each group; style 112 (yyyymmdd) is used so that string comparison matches date order. In the outer SELECT, we then split the maximum value on the * character to get the two values back.
I have a large data set which for the purpose of this question has 3 fields:
Group Identifier
From Date
To Date
On any given row the From Date will always be less than the To Date but within each group the time periods (which are in no particular order) represented by the date pairs could overlap, be contained one within another, or even be identical.
What I'd like to end up with is a query that condenses the results for each group down to just the continuous periods. For example a group that looks like this:
| Group ID | From Date | To Date |
--------------------------------------
| A | 01/01/2012 | 12/31/2012 |
| A | 12/01/2013 | 11/30/2014 |
| A | 01/01/2015 | 12/31/2015 |
| A | 01/01/2015 | 12/31/2015 |
| A | 02/01/2015 | 03/31/2015 |
| A | 01/01/2013 | 12/31/2013 |
Would result in this:
| Group ID | From Date | To Date |
--------------------------------------
| A | 01/01/2012 | 11/30/2014 |
| A | 01/01/2015 | 12/31/2015 |
I've read a number of articles on date packing but I can't quite figure out how to apply that to my data set.
How can I construct a query that would give me those results?
The solution is from the book "Microsoft® SQL Server 2012 High-Performance T-SQL Using Window Functions":
;with C1 as(
-- one +1 event at each FromDate, and one -1 event on the day after each ToDate
-- (the +1 day makes ranges that merely touch count as continuous)
select GroupID, FromDate as ts, +1 as type, 1 as sub
from dbo.table_name
union all
select GroupID, dateadd(day, +1, ToDate) as ts, -1 as type, 0 as sub
from dbo.table_name),
C2 as(
-- running count of open ranges; rows where cnt = 0 are the boundaries
-- (starts and ends) of the condensed periods
select C1.*
, sum(type) over(partition by GroupID order by ts, type desc
rows between unbounded preceding and current row) - sub as cnt
from C1),
C3 as(
-- pair the boundary points up: rows 1-2 form period 1, rows 3-4 period 2, ...
select GroupID, ts, floor((row_number() over(partition by GroupID order by ts) - 1) / 2 + 1) as grpnum
from C2
where cnt = 0)
-- max(ts) is the day after the period really ends, so step back one day
select GroupID, min(ts) as FromDate, dateadd(day, -1, max(ts)) as ToDate
from C3
group by GroupID, grpnum;
Create table:
if object_id('table_name') is not null
drop table table_name
create table table_name(GroupID varchar(100), FromDate datetime,ToDate datetime)
insert into table_name
select 'A', '01/01/2012', '12/31/2012' union all
select 'A', '12/01/2013', '11/30/2014' union all
select 'A', '01/01/2015', '12/31/2015' union all
select 'A', '01/01/2015', '12/31/2015' union all
select 'A', '02/01/2015', '03/31/2015' union all
select 'A', '01/01/2013', '12/31/2013'
I'd use a Calendar table. This table simply has a list of dates for several decades.
CREATE TABLE [dbo].[Calendar](
[dt] [date] NOT NULL,
CONSTRAINT [PK_Calendar] PRIMARY KEY CLUSTERED
(
[dt] ASC
))
There are many ways to populate such table.
For example, 100K rows (~270 years) from 1900-01-01:
INSERT INTO dbo.Calendar (dt)
SELECT TOP (100000)
DATEADD(day, ROW_NUMBER() OVER (ORDER BY s1.[object_id])-1, '19000101') AS dt
FROM sys.all_objects AS s1 CROSS JOIN sys.all_objects AS s2
OPTION (MAXDOP 1);
Once you have a Calendar table, here is how to use it.
Each original row is joined with the Calendar table to return as many rows as there are dates between From and To.
Then possible duplicates are removed.
Then classic gaps-and-islands by numbering the rows in two sequences.
Then grouping found islands together to get the new From and To.
Sample data
I added a second group.
DECLARE #T TABLE (GroupID int, FromDate date, ToDate date);
INSERT INTO #T (GroupID, FromDate, ToDate) VALUES
(1, '2012-01-01', '2012-12-31'),
(1, '2013-12-01', '2014-11-30'),
(1, '2015-01-01', '2015-12-31'),
(1, '2015-01-01', '2015-12-31'),
(1, '2015-02-01', '2015-03-31'),
(1, '2013-01-01', '2013-12-31'),
(2, '2012-01-01', '2012-12-31'),
(2, '2013-01-01', '2013-12-31');
Query
WITH
CTE_AllDates
AS
(
SELECT DISTINCT
T.GroupID
,CA.dt
FROM
#T AS T
CROSS APPLY
(
SELECT dbo.Calendar.dt
FROM dbo.Calendar
WHERE
dbo.Calendar.dt >= T.FromDate
AND dbo.Calendar.dt <= T.ToDate
) AS CA
)
,CTE_Sequences
AS
(
SELECT
GroupID
,dt
,ROW_NUMBER() OVER(PARTITION BY GroupID ORDER BY dt) AS Seq1
,DATEDIFF(day, '2001-01-01', dt) AS Seq2
,DATEDIFF(day, '2001-01-01', dt) -
ROW_NUMBER() OVER(PARTITION BY GroupID ORDER BY dt) AS IslandNumber
FROM CTE_AllDates
)
SELECT
GroupID
,MIN(dt) AS NewFromDate
,MAX(dt) AS NewToDate
FROM CTE_Sequences
GROUP BY GroupID, IslandNumber
ORDER BY GroupID, NewFromDate;
Result
+---------+-------------+------------+
| GroupID | NewFromDate | NewToDate |
+---------+-------------+------------+
| 1 | 2012-01-01 | 2014-11-30 |
| 1 | 2015-01-01 | 2015-12-31 |
| 2 | 2012-01-01 | 2013-12-31 |
+---------+-------------+------------+
; with
cte as
(
select *, rn = row_number() over (partition by [Group ID] order by [From Date])
from tbl
),
rcte as
(
select rn, [Group ID], [From Date], [To Date], GrpNo = 1, GrpFrom = [From Date], GrpTo = [To Date]
from cte
where rn = 1
union all
select c.rn, c.[Group ID], c.[From Date], c.[To Date],
GrpNo = case when c.[From Date] between r.GrpFrom and dateadd(day, 1, r.GrpTo)
or c.[To Date] between r.GrpFrom and r.GrpTo
then r.GrpNo
else r.GrpNo + 1
end,
GrpFrom= case when c.[From Date] between r.GrpFrom and dateadd(day, 1, r.GrpTo)
or c.[To Date] between r.GrpFrom and r.GrpTo
then case when c.[From Date] > r.GrpFrom then c.[From Date] else r.GrpFrom end
else c.[From Date]
end,
GrpTo = case when c.[From Date] between r.GrpFrom and dateadd(day, 1, r.GrpTo)
or c.[To Date] between r.GrpFrom and dateadd(day, 1, r.GrpTo)
then case when c.[To Date] > r.GrpTo then c.[To Date] else r.GrpTo end
else c.[To Date]
end
from rcte r
inner join cte c on r.[Group ID] = c.[Group ID]
and r.rn = c.rn - 1
)
select [Group ID], min(GrpFrom), max(GrpTo)
from rcte
group by [Group ID], GrpNo
A Geometric Approach
Here and elsewhere I've noticed that date packing questions don't provide a geometric approach to this problem. After all, any range, date ranges included, can be interpreted as a line. So why not convert them to a SQL geometry type and utilize geometry::UnionAggregate to merge the ranges? So I gave it a stab with your post.
Code Description
In 'numbers': I build a table representing a sequence. Swap it out for your favorite way to make a numbers table. For a union operation, you won't ever need more rows than are in your original table, so I just use it as the base to build from.
In 'mergeLines': I convert the dates to floats and use those floats to create geometrical points. In this problem we're working in 'integer space', meaning there are no time considerations, so a begin date in one range that is one day apart from an end date in another should be merged with that other. To make that merge happen, we need to convert to 'real space', so we add 1 to the tail of all ranges (we undo this later). I then connect these points via STUnion and STEnvelope. Finally, I merge all these lines via UnionAggregate. The resulting 'lines' geometry object might contain multiple lines, but if they overlap, they turn into one line.
In the outer query: I use the numbers CTE to extract the individual lines inside 'lines'. I envelope the lines, which ensures that each line is stored only as its two endpoints. I read the endpoint x values and convert them back to their time representations, making sure to put them back into 'integer space'.
The Code
with
numbers as (
select row_number() over (order by (select null)) i
from #spans -- Where I put your data
),
mergeLines as (
select groupId,
lines = geometry::UnionAggregate(line)
from #spans
cross apply (select
startP = geometry::Point(convert(float,fromDate), 0, 0),
stopP = geometry::Point(convert(float,toDate) + 1, 0, 0)
) pointify
cross apply (select line = startP.STUnion(stopP).STEnvelope()) lineify
group by groupId
)
select groupId, fromDate, toDate
from mergeLines ml
join numbers n on n.i between 1 and ml.lines.STNumGeometries()
cross apply (select line = ml.lines.STGeometryN(i).STEnvelope()) l
cross apply (select
fromDate = convert(datetime, l.line.STPointN(1).STX),
toDate = convert(datetime, l.line.STPointN(3).STX) - 1
) unprepare
order by groupId, fromDate;
Today my issue has to do with marking continuous periods of time where a given criterion is met. My raw data of interest looks like this.
Salesman ID Pay Period ID Total Commissionable Sales (US dollars)
1 101 525
1 102 473
1 103 672
1 104 766
2 101 630
2 101 625
.....
I want to mark continuous periods of time where a salesman has achieved $500 of sales or more. My ideal result should look like this.
[Salesman ID] [Start time] [End time] [# Periods] [Average Sales]
1 101 101 1 525
1 103 107 5 621
2 101 103 3 635
3 104 106 3 538
I know how to do everything else, but I cannot figure out a non-super-expensive way to identify start and end dates. Help!
Try something like this. The innermost select-statement basically adds a new column to the original table with a flag determining when a new group begins. Outside this statement, we use this flag in a running total, that then enumerates the groups - we call this column [Group ID]. All that is left, is then to filter out the rows where [Sales] < 500, and group by [Salesman ID] and [Group ID].
SELECT [Salesman ID], MIN([Pay Period ID]) AS [Start time],
MAX([Pay Period ID]) AS [End time], COUNT(*) AS [# of periods],
AVG([Sales]) AS [Average Sales]
FROM (
SELECT [Salesman ID], [Pay Period ID], [Sales],
SUM(NewGroup) OVER (PARTITION BY [Salesman ID] ORDER BY [Pay Period ID]
ROWS UNBOUNDED PRECEDING) AS [Group ID]
FROM (
SELECT T1.*,
CASE WHEN T1.[Sales] >= 500 AND (Prev.[Sales] < 500 OR Prev.[Sales] IS NULL)
THEN 1 ELSE 0 END AS [NewGroup]
FROM MyTable T1
LEFT JOIN MyTable Prev ON Prev.[Salesman ID] = T1.[Salesman ID]
AND Prev.[Pay Period ID] = T1.[Pay Period ID] - 1
) AS InnerQ
) AS MiddleQ
WHERE [Sales] >= 500
GROUP BY [Salesman ID], [Group ID]
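On SQL Server 2012 and later, the self-join for the previous period can also be replaced with LAG; here is a sketch under the same assumed names (MyTable, [Sales]). Note that LAG looks at the previous existing row, while the join above requires the period IDs to be consecutive, so pick whichever matches your data.
SELECT [Salesman ID], MIN([Pay Period ID]) AS [Start time],
       MAX([Pay Period ID]) AS [End time], COUNT(*) AS [# of periods],
       AVG([Sales]) AS [Average Sales]
FROM (
    SELECT [Salesman ID], [Pay Period ID], [Sales],
           SUM(NewGroup) OVER (PARTITION BY [Salesman ID] ORDER BY [Pay Period ID]
                               ROWS UNBOUNDED PRECEDING) AS [Group ID]
    FROM (
        SELECT [Salesman ID], [Pay Period ID], [Sales],
               -- flag the start of a qualifying streak: current period >= 500
               -- and the previous period (if any) was below 500
               CASE WHEN [Sales] >= 500
                         AND ISNULL(LAG([Sales]) OVER (PARTITION BY [Salesman ID]
                                                       ORDER BY [Pay Period ID]), 0) < 500
                    THEN 1 ELSE 0 END AS NewGroup
        FROM MyTable
    ) AS InnerQ
) AS MiddleQ
WHERE [Sales] >= 500
GROUP BY [Salesman ID], [Group ID];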
I want to take the last request date from a union column.
I have this code to display the last request date:
;with cte
as
(
select
[Date], [Badge id], Name, Reason, [Item1] item,
row_number() over (partition by [Badge id], [Item1] order by getdate()) rn
from tbl_Request2
where [Item1] Is Not null
union
select
[Date], [Badge id], Name, Reason,[Item2] item,
row_number() over (partition by [Badge id], [Item2] order by getdate()) rn
from tbl_Request2 where [Item2] is not null
)
Select
T.[Badge ID], T.Name, T.Item, T.Reason, T. [Date] as [Current Request],
ISNULL((Select top 1 [Date]
from CTE
where
CTE.[Badge Id]=T.[Badge Id] and
CTE.[Item] = T.item and
CTE. [Date] < T. [Date]),T. [Date]) as [Last Requested]
From CTE T
order by [Badge ID]
It does display a last request date, but not the expected one. It displays, for example:
ID 001 requests Item1 on 12/05/2014; THEN
ID 001 requests Item1 again on 13/05/2014; it displays the last requested as 12/05/2014; and then
ID 001 requests Item1 again on 14/05/2014; it still displays the last requested as 12/05/2014 --> and here is the ERROR
I want it to display the last request as 13/05/2014.
The expected table:
ID | Items | Date | Last Request Date
001 | Item1 | 12/05/2014 | 12/05/2014 --> lets say this is the first request of ID001
002 | Item2 | 25/04/2014 | 20/05/2014
001 | Item1 | 13/05/2014 | 12/05/2014 --> It display the date of first requested
001 | Item1 | 14/05/2014 | 13/05/2014 --> display the second request date
Do you have any suggestions about this error?
Sorry for posting it again. I already asked this question yesterday, but it still has some errors.
Thanks in advance.
It's kind of embarrassing to ask a question and then answer it myself. I just realized something was missing in my code: I only needed to remove TOP 1 and use MAX([Date]) instead. Sorry for asking about something that was my own mistake.
;with cte
as
(
select
[Date], [Badge id], Name, Reason, [Item1] item,
row_number() over (partition by [Badge id], [Item1] order by getdate()) rn
from tbl_Request2
where [Item1] Is Not null
union
select
[Date], [Badge id], Name, Reason,[Item2] item,
row_number() over (partition by [Badge id], [Item2] order by getdate()) rn
from tbl_Request2 where [Item2] is not null
)
Select
T.[Badge ID], T.Name, T.Item, T.Reason, T. [Date] as [Current Request],
ISNULL((Select max([Date])
from CTE
where
CTE.[Badge Id]=T.[Badge Id] and
CTE.[Item] = T.item and
CTE. [Date] < T. [Date]),T. [Date]) as [Last Requested]
From CTE T
order by [Badge ID]
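Since the query already uses window functions (SQL Server 2012+), the correlated subquery could also be replaced with LAG, which returns the previous request date for the same badge and item directly. A sketch that would replace the final SELECT of the statement above (same cte assumed):
Select
    T.[Badge ID], T.Name, T.Item, T.Reason, T.[Date] as [Current Request],
    -- previous request date for the same badge/item; defaults to the
    -- current row's date when there is no earlier request
    lag(T.[Date], 1, T.[Date]) over (partition by T.[Badge Id], T.Item
                                     order by T.[Date]) as [Last Requested]
From CTE T
order by [Badge ID]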
EDIT: begin_date and end_date are type DATE columns in any table.
I have the following dimension table which provides how many total days each month has for years 1980 through 2500:
CREATE TABLE total_days
(
from_date DATE,
to_date DATE,
days_in_month SMALLINT
);
from_date to_date days_in_month
1980-01-01 1980-01-31 31
1980-02-01 1980-02-29 29
...
2500-11-01 2500-11-30 30
2500-12-01 2500-12-31 31
How should I construct an SQL query to obtain an accurate end_date if I were to add 360 months to begin_date? Do I need to alter the dimension table in any way to achieve my goal?
EDIT: The date arithmetic must be performed without using any native SQL date arithmetic functions. It must be done by looking up the begin_date in the dimension table.
I guess you've got your reasons - here's a very simple hack.
Assuming that the fact table has a row for every month:
Add a new column that represents the month number; start it at 1 and increment it in chronological order, not starting over with each year (a sketch of one way to populate it follows the sample data below).
SELECT B.*
FROM SO_total_days2 A
INNER JOIN SO_total_days2 B ON B.monthnumber = A.monthnumber + 360
WHERE A.from_date = '2010-01-01'
from_date to_date days_in_month monthnumber
1980-01-01 1980-01-31 31 1
1980-02-01 1980-02-29 29 2
1980-03-01 1980-03-31 31 3
...
1981-01-01 1981-01-31 31 13
1981-12-01 1981-12-31 31 24
...
1985-01-01 1985-01-31 31 61
1985-12-01 1985-12-31 31 72
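For completeness, a sketch of one way to populate such a monthnumber column (assuming the column has already been added to the SO_total_days2 table used above):
-- Number the months 1, 2, 3, ... in chronological order of from_date,
-- without restarting each year.
;with numbered as (
    select monthnumber,
           row_number() over (order by from_date) as rn
    from SO_total_days2
)
update numbered
set monthnumber = rn;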
If I were doing this database-agnostically, I'd change the fact table a bit:
CREATE TABLE total_days
(
year INT,
month TINYINT,
from_date DATE,
to_date DATE,
days_in_month SMALLINT
);
year month from_date to_date days_in_month
------------------------------------------------
1980 1 1980-01-01 1980-01-31 31
1980 2 1980-02-01 1980-02-29 29
...
2500 11 2500-11-01 2500-11-30 30
2500 12 2500-12-01 2500-12-31 31
Then you could use something like:
SELECT td.*
FROM
  total_days AS td
CROSS JOIN
  ( SELECT year, month
    FROM total_days
    WHERE from_date <= @StartingDate
      AND @StartingDate <= to_date
  ) AS st
CROSS JOIN
  ( SELECT 360 AS add_months ) AS param
WHERE td.year = st.year + ( st.month - 1 + add_months ) / 12
  AND td.month = 1 + ( st.month - 1 + add_months ) % 12
;
Or the simpler (but a bit harder to optimize for efficiency):
WHERE 12 * td.year + td.month =
      12 * st.year + st.month + add_months
For example, starting at year 2010, month 7 and adding 360 months: 12 * 2010 + 7 + 360 = 24487 = 12 * 2040 + 7, i.e. July 2040.
This is what I imagine your "fact table" looks like:
declare @dt datetime
set @dt = '7-1-2012'
;
with date_table as (
select @dt as [Start Date],
dateadd(d,-1,dateadd(mm,1,@dt)) as [End Date],
datepart(d,dateadd(d,-1,dateadd(mm,1,@dt))) as [Days]
union ALL
select dateadd(mm, 1, [Start Date]) as [Start Date],
dateadd(d,-1,dateadd(mm,1,dateadd(mm, 1, [Start Date]))) as [End Date],
datepart(d,dateadd(d,-1,dateadd(mm,1,dateadd(mm, 1, [Start Date])))) as [Days]
from date_table
where dateadd(mm, 1, [Start Date]) <= dateadd(m,500,@dt))
select [Start Date], [End Date], [Days]
into #temp
from date_table
option (MAXRECURSION 0)
This selects the dates. (Notice how there is no DATEADD or DATEPART included in these statements.)
select finish.[Start Date], finish.[End Date], finish.[Days]
from (select rownum
from (select [Start Date], [End Date], [Days], row_number() over (order by [Start Date]) as rownum
from #temp) as x
where x.[Start Date] = '2012-07-01 00:00:00.000' ) as start
join (select [Start Date], [End Date], [Days],
row_number() over (order by [Start Date]) as rownum
from #temp) as finish
on finish.rownum = start.rownum + 360
I read your comments below... if you're trying to sum up the days or something, this is how you could do it (starting with July 1, 2012 and going for 360 months, the date_diff_days result would be the total number of days for the 360 months, using that #temp table I made, which I assume is similar to your fact table; I got 10957 days):
select sum(dayscount.[Days]) as date_diff_days
from (select rownum
from (select [Start Date], [End Date], [Days], row_number() over (order by [Start Date]) as rownum
from #temp) as x
where x.[Start Date] = '2012-07-01 00:00:00.000' ) as start
join (select [Start Date], [End Date], [Days],
row_number() over (order by [Start Date]) as rownum
from #temp) as finish
on finish.rownum = start.rownum + 360
join (select [Start Date], [End Date], [Days],
row_number() over (order by [Start Date]) as rownum
from #temp) as dayscount
on dayscount.rownum >= start.rownum and
dayscount.rownum < finish.rownum
Why the fact table? Most DBs have native support for date/time manipulation. In MS SQL Server, you would do this with DATEADD.
I see you tagged your question with "informix" but didn't specify any version details in your question. Here's an ADD_MONTHS function from IBM Informix 11.50.
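For illustration, the SQL Server version would be along these lines (the variable name is just for the example):
-- Add 360 months to a begin date with native date arithmetic.
declare @begin_date date = '2012-07-01';
select dateadd(month, 360, @begin_date) as end_date;  -- 2042-07-01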
You can use INTERVAL:
mysql> SELECT '2008-12-31 23:59:59' + INTERVAL 1 month;
+------------------------------------------+
| '2008-12-31 23:59:59' + INTERVAL 1 month |
+------------------------------------------+
| 2009-01-31 23:59:59 |
+------------------------------------------+
1 row in set (0.00 sec)
mysql> select now();
+---------------------+
| now() |
+---------------------+
| 2012-07-03 12:27:46 |
+---------------------+
1 row in set (0.00 sec)
mysql> SELECT now() + INTERVAL 30 month;
+---------------------------+
| now() + INTERVAL 30 month |
+---------------------------+
| 2015-01-03 12:27:49 |
+---------------------------+
1 row in set (0.00 sec)
EDIT:
mysql> SELECT STR_TO_DATE('01,5,2013','%d,%m,%Y') + interval 30 month;
+---------------------------------------------------------+
| STR_TO_DATE('01,5,2013','%d,%m,%Y') + interval 30 month |
+---------------------------------------------------------+
| 2015-11-01 |
+---------------------------------------------------------+
1 row in set (0.00 sec)
EDIT 2:
mysql> show create table tiempo;
+--------+------------------------------------------------------------------------------------------------+
| Table | Create Table |
+--------+------------------------------------------------------------------------------------------------+
| tiempo | CREATE TABLE `tiempo` (
`fecha` datetime DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+--------+------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)
mysql> select fecha + interval 20 month from tiempo;
+---------------------------+
| fecha + interval 20 month |
+---------------------------+
| NULL |
| 2001-10-02 02:02:02 |
+---------------------------+
2 rows in set (0.00 sec)
If you have the from_date exactly as it is listed and you are always adding months, you could use this:
SELECT max (to_date)
FROM (SELECT ROW_NUMBER () OVER (ORDER BY from_date) AS Row,
from_date,
to_date,
days_in_month
FROM total_days
WHERE from_date > '1/1/1982'
GROUP BY from_date, to_date, days_in_month) MyDates
WHERE Row <= 360