I'm building a scheduling system where I store an initial appointment and how often it repeats. My table looks something like this:
CREATE TABLE tbl (
id serial primary key,
initial_timestamp timestamp not null,
recurring interval
);
id | initial_timestamp | recurring
---+-------------------+----------
27 | 2020-06-02        | 3 weeks
24 | 2020-06-03        | 10 days
Assuming I can handle the time component, and that the only intervals we'll run across are days and weeks, how can I find when those two appointments will overlap? For example, the two rows above will overlap on June 23rd: that is 3 weeks from June 2nd and 20 days from June 3rd, so the first appointment will repeat once on that day, and the second will repeat on the 13th and then the 23rd.
In my program, I have another date, say June 7th with a recurring interval of 12 days. What query can I use to find the time it will take for a recurring appointment starting on June 7th to overlap with every existing recurring appointment? So for example, this appointment will repeat on June 19, July 1, and July 13. Appointment #24 from the table above will repeat on June 13, June 23, July 3, and July 13, if my math is right. I'd like my query comparing this appointment to appointment #24 to return, first of all, July 13th, then also how long it would take to repeat again, which I assume would be like finding the least common multiple of the two intervals, in this case, 60 days (LCM of 12 and 10). So I could expect it to repeat again on July 13 + 60 days = Sept 11.
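A throwaway Postgres snippet to double-check that arithmetic (just the Euclidean algorithm in a recursive CTE; the step sizes and the July 13th date are hard-coded from the example above):
WITH RECURSIVE euclid(a, b) AS (
   VALUES (12, 10)                      -- the two step sizes in days
   UNION ALL
   SELECT b, a % b FROM euclid WHERE b <> 0
)
SELECT a                               AS gcd_days
     , 12 * 10 / a                     AS lcm_days        -- 60
     , date '2020-07-13' + 12 * 10 / a AS next_overlap    -- 2020-09-11
FROM   euclid
WHERE  b = 0;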
I tried using generate_series, but since I don't know the size of the intervals, the series would have to continue infinitely, right? It's probably not the best choice here. I assume the answer would have more to do with the math of multiplying intervals somehow.
Note that recurring can be null, so I'd assume there has to be something like WHERE recurring IS NOT NULL in there somewhere. Another thing to note: no initial appointments overlap. I've already guarded against that. The search term doesn't overlap with any of the appointments' initial times either.
If it helps at all, I'm using PHP 5.3 to send queries to Postgres 9.4 (I know, it's an ancient setup). I'd prefer to do most of this in SQL just because most of the other logic is in SQL right now, so I can just run the query and start manipulating the results with PHP.
So in summary, if my math is right, what Postgres query should I use with the table above to compare a given date and interval with every date and interval pair from the table to find the next date those two overlap and how far apart each overlap instance would be?
This was hard.
WITH RECURSIVE moving_target(initial_timestamp, recurring) AS (
VALUES (timestamp '2020-06-07', interval '12 days') -- search term
)
, x AS ( -- advance to the closest day before or at moving target
SELECT t.id
, t_date + ((m_date - t_date) / t_step) * t_step AS t_date
, t_step
, m.*
FROM ( -- normalize table data
SELECT id
, initial_timestamp::date AS t_date
, EXTRACT ('days' FROM recurring)::int AS t_step
FROM tbl
WHERE recurring IS NOT NULL -- exclude!
) t
CROSS JOIN ( -- normalize input
SELECT initial_timestamp::date AS m_date
, EXTRACT ('days' FROM recurring)::int AS m_step
FROM moving_target
) m
)
, rcte AS ( -- recursive CTE
SELECT id, t_date, t_step, m_date, m_step
, ARRAY[m_date - t_date] AS gaps -- keep track of gaps
, CASE
WHEN t_date = m_date THEN true -- found match
WHEN t_step % m_step = 0 THEN false -- can never match
WHEN (m_date - t_date) % 2 = 1 -- odd gap ...
AND t_step % 2 = 0 -- ... but even steps
AND m_step % 2 = 0 THEN false -- can never match
-- WHEN <stop conditions?> THEN false -- hard to determine!
-- ELSE null -- keep searching
END AS match
FROM x
UNION ALL
SELECT id, t_date, t_step, m_date, m_step
, gaps || m_date - t_date
, CASE
WHEN t_date = m_date THEN true
WHEN (m_date - t_date) = ANY (gaps) THEN false -- gap repeated!
-- ELSE null -- keep searching
END AS match
FROM (
SELECT id
, t_date + (((m_date + m_step) - t_date) / t_step) * t_step AS t_date
, t_step
, m_date + m_step AS m_date -- + 1 step
, m_step
, gaps
FROM rcte
WHERE match IS NULL
) sub
)
SELECT id, t.initial_timestamp, t.recurring
, CASE WHEN r.match THEN r.t_date END AS match_date
FROM rcte r
JOIN tbl t USING (id)
WHERE r.match IS NOT NULL;
db<>fiddle here - with more test rows
There may be potential to improve further. The core problem is in the realm of
prime factorization. As it seems reasonable to expect fairly small intervals, I solved it by testing for cycles: If, while incrementally stepping forward, a gap between dates is detected that we have seen before, and dates didn't overlap yet, they will never overlap and we can stop. This loops at most GREATEST(m_step, t_step) times (the number of days in the bigger interval), so it shouldn't scale terribly.
I identified some basic mathematical stop conditions to avoid looping in hopeless cases a priori. There may be more ...
Explaining everything that's going on here is more work than devising the query. I added comments that should explain basics ...
Then again, while intervals are small, a "brute force" approach based on generate_series() may still be faster.
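To illustrate, a minimal sketch of that brute-force variant, assuming the same tbl and search term as above and an arbitrary one-year horizon (it simply intersects the two generated series and keeps the earliest hit per row; rows that don't overlap within the horizon just drop out):
WITH moving_target(initial_timestamp, recurring) AS (
   VALUES (timestamp '2020-06-07', interval '12 days')  -- search term
)
SELECT t.id, min(g_t.d)::date AS match_date
FROM   tbl t
CROSS  JOIN moving_target m
CROSS  JOIN LATERAL generate_series(t.initial_timestamp
                                  , t.initial_timestamp + interval '1 year'
                                  , t.recurring) AS g_t(d)
JOIN   LATERAL generate_series(m.initial_timestamp
                             , m.initial_timestamp + interval '1 year'
                             , m.recurring) AS g_m(d) ON g_m.d::date = g_t.d::date
WHERE  t.recurring IS NOT NULL
GROUP  BY t.id;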
Related
I have two date fields, DATE_FIELD_ONE = 8/30/2018 and DATE_FIELD_TWO = DATE_FIELD_ONE + 20. I need to find what DATE_FIELD_TWO should be if I'm only adding 20 business days. How would I accomplish this? I thought maybe trying 'DY', but I'm not sure how to get it to work. Thanks.
CASE WHEN TO_CHAR(TO_DATE(DATE_FIELD_ONE),'DY')='SAT' THEN 1 ELSE 0 END
CASE WHEN TO_CHAR(TO_DATE(DATE_FIELD_ONE),'DY')='SUN' THEN 1 ELSE 0 END
You may try this:
select max(date_field_two) as date_field_two
from
(
select date'2018-08-30'+
cast(case when to_char(date'2018-08-30'+level,'D','NLS_DATE_LANGUAGE=ENGLISH')
in ('6','7') then
0
else
level
end as int) as date_field_two,
sum(cast(case when to_char(date'2018-08-30'+level,'D','NLS_DATE_LANGUAGE=ENGLISH')
in ('6','7') then
0
else
1
end as int)) over (order by level) as next_day
from dual
connect by level <= 20*1.5
-- 20 is the number of business days to add. We must scan enough calendar days to cover them:
-- a week has 7 calendar days but only 5 business days, and 7 = 5+2 < 5*1.5,
-- so scanning (business days * 1.5) calendar days is always enough
-- (1.5 is just a coefficient; a greater one like 2 would also work).
-- Here: 20*1.5 = 30 > 4*7 = 28 calendar days needed for 20 business days.
)
where next_day = 20;
DATE_FIELD_TWO
-----------------
27.09.2018
by using the CONNECT BY LEVEL clause against dual.
P.S. This ignores public holidays, which differ from one culture to another, since the question is only concerned with weekends.
Rextester Demo
Edit: Assume you have national holidays on '2018-09-25' and '2018-09-26' (within this range of days); then consider the following:
select max(date_field_two) as date_field_two
from
(
select date'2018-08-30'+
(case when to_char(date'2018-08-30'+level,'D','NLS_DATE_LANGUAGE=ENGLISH')
in ('6','7') then
0
when date'2018-08-30'+level in (date'2018-09-25',date'2018-09-26') then
0
else
level
end) as date_field_two,
sum(cast(case when to_char(date'2018-08-30'+level,'D','NLS_DATE_LANGUAGE=ENGLISH')
in ('6','7') then
0
when date'2018-08-30'+level in (date'2018-09-25',date'2018-09-26') then
0
else
1
end as int)) over (order by level) as next_day
from dual
connect by level <= 20*2
)
where next_day = 20;
DATE_FIELD_TWO
-----------------
01.10.2018
which pushes the result one more working day out for each holiday (here from 27.09.2018 to 01.10.2018), unless the holiday coincides with a weekend.
You can define workdays to be whatever you like if you use a PL/SQL function.
Here is a simple prototype - without any holidays - but it could be adapted for that purpose using the same kind of logic.
create or replace function add_business_days (from_date IN date, bd IN integer) return date as
fd  date := trunc(from_date, 'iw');        -- Monday of from_date's ISO week
cnt int  := (trunc(from_date) - fd) + bd;  -- business-day offset from that Monday
ww  int  := floor(cnt / 5);                -- whole weeks to add
wd  int  := mod(cnt, 5);                   -- remaining weekdays
begin
return fd + (ww * 7) + wd;
end;
/
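A quick smoke test with the question's example, assuming the arithmetic above (output formatted as in the earlier examples):
select add_business_days(date '2018-08-30', 20) as date_field_two from dual;

DATE_FIELD_TWO
-----------------
27.09.2018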
I realize you already have an answer, but for what it's worth this is something we deal with all the time and have what has turned out to be a very good solution.
In effect, we maintain a separate table called "work days" that has every conceivable date we would ever compare (and that definition will vary from application to application, of course -- but in any case it will never be "huge" by RDBMS standards). There is a boolean flag that dictates if the date is a work day or a weekend/holiday, but more importantly there is an index value that only increments on work days. The table looks like this:
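A minimal sketch of such a table, with column names taken from the queries below (the index values are illustrative only; one index resolves a non-work day to the preceding work day, the other to the following one, as discussed further down):
create table util.work_days (
    cal_date            date primary key,
    is_workday          boolean not null,
    workday_index       integer not null,   -- increments only on work days
    workday_index_back  integer not null    -- same idea, resolving the other direction
);

cal_date   | is_workday | workday_index | workday_index_back
2018-08-30 | true       |           100 |                100
2018-08-31 | true       |           101 |                101
2018-09-01 | false      |           101 |                102
2018-09-02 | false      |           101 |                102
2018-09-03 | true       |           102 |                102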
The advantage to this is transparency and scalability. If you want the difference between two dates in work days:
select
h.entry_date, h.invoice_date, wd2.workday_index - wd1.workday_index as delta
from
sales_order_data h
join util.work_days wd1 on h.sales_order_entry_dte = wd1.cal_date
join util.work_days wd2 on h.invoice_dte = wd2.cal_date
If you need to take a date in a table and add 20 days (like your original problem statement):
select
h.date_field_1, wd2.cal_date as date_field_1_plus_20
from
my_table h
join util.work_days wd1 on h.date_field_1 = wd1.cal_date
join util.work_days wd2 on
wd1.workday_index + 20 = wd2.workday_index and
wd2.is_workday
(disclaimer, this is in PostgreSQL, which is why I have the boolean. In Oracle, I'm guessing you need to change that to an integer and say =1)
Also, for the bonus question, this also gives two different options for defining "work day," one that rolls forward and another that rolls backwards (hence the workday_index and workday_index_back). For example, if you need something on a Saturday, and Saturday is not a work day, that means you need it on Friday. Conversely, if something is to be delivered on Saturday, and Saturday is not a work day, then that means it will be available on Monday. The context of how to handle non-workdays differs, and this method affords you the option of choosing the right one.
As a final selling point, this option allows you to define holidays as non-work days also... and you can do this or not do this; it's up to you. The solution permits either option. You could theoretically add two more columns for a weekend-only work day index that give you both options.
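A sketch of how such a table could be seeded in PostgreSQL, weekends only (how the two index columns resolve non-work days is my reading of the rolling-forward/backward discussion above; holidays aren't handled here and would have to be folded into the is_workday test before the running index is computed):
insert into util.work_days (cal_date, is_workday, workday_index, workday_index_back)
select cal_date
     , is_workday
     , workday_index
       -- a non-work day resolves forward to the next work day's index
     , workday_index + case when is_workday then 0 else 1 end as workday_index_back
from (
    select d::date                      as cal_date
         , extract(isodow from d) < 6   as is_workday
         , sum(case when extract(isodow from d) < 6 then 1 else 0 end)
               over (order by d)        as workday_index
    from generate_series(date '2000-01-01', date '2049-12-31', interval '1 day') as g(d)
) s;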
Date jm Text
-------- ---- ----
6/3/2015 ne Good
6/4/2015 ne Good
6/5/2015 ne Same
6/8/2015 ne Same
I want to count how often the "Same" value occurs in a run of consecutive days.
I don't want to count the value for the whole database. As of the current date, the count in the example above is 2.
It is very important for me that "Same" never occurs...
The query has to ignore the weekend (June 6 and 7).
Date jm Text
-------- ---- ----
6/3/2015 ne Same
6/4/2015 ne Same
6/5/2015 ne Good
6/8/2015 ne Good
In this example the count is zero
Okay, I'm starting to get the picture, although at first I thought you wanted to count by jm, and now it seems you want to count by Text = 'Same'. Anyway, that's what this query should do. It gets the row for the current date, connects all previous matching rows, and counts them. It also returns the current text (which is the same for all connected rows) and how many of those rows are 'Same'.
So the query will return one row (if there is one for today), which will show the date, jm and Text of the current date, the number of consecutive days for which the Text has been the same (just in case you want to know how many days it is 'Good'), and the number of days (either 0 or the same as the other count) for which the Text has been 'Same'.
I hope this query is right, or at least it gives you an idea of how to solve the problem using CONNECT BY. I should mention I based the 'Friday-detection' on this question.
Also, I don't have Oracle at hand, so please forgive me for any minor syntax errors.
WITH
VW_SAMESTATUSES AS
( SELECT t.*
FROM YourTable t
START WITH -- Start with the row for today
t.Date = trunc(sysdate)
CONNECT BY -- Connect to previous rows that have a lower date.
-- Note that PRIOR refers to the prior record, which is
-- actually the NEXT day. :)
PRIOR t.Date = t.Date +
CASE MOD(TO_CHAR(t.Date, 'J'), 7) + 1
WHEN 5 THEN 3 -- Friday, so add 3 days to reach the next working day
ELSE 1 -- Other days, so add one
END
-- And the Text also has to match the one of the next day.
AND t.Text = PRIOR t.Text)
SELECT s.Date,
s.jm,
MAX(Text) AS CurrentText, -- Not really MAX, they are actually all the same
COUNT(*) AS ConsecutiveDays,
COUNT(CASE WHEN Text = 'Same' THEN 1 END) as SameCount
FROM VW_SAMESTATUSES s
GROUP BY s.Date,
s.jm
This recursive query (available from Oracle version 11g) might be useful:
with s(tcode, tdate) as (
select tcode, tdate from test where tdate = date '2015-06-08'
union all
select t.tcode, t.tdate from test t, s
where s.tcode = t.tcode
and t.tdate = s.tdate - decode(s.tdate-trunc(s.tdate, 'iw'), 0, 3, 1) )
select count(1) cnt from s
SQLFiddle
I prepared sample data according to your original question, without further edits; you can see it in the attached SQLFiddle. Additional conditions for the 'Text' column
are very simple: just add something like ... and Text = 'Same' to the where clauses.
In its current version the query counts the number of previous days, starting from a given date (change it in line 2), where the dates are consecutive (excluding weekend days) and the value in column tcode is the same for all days.
The part decode(s.tdate - trunc(s.tdate, 'iw'), 0, 3, 1) subtracts days depending on whether it's a Monday or another day, and should work independently of NLS settings.
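To see what that part does, a quick check against the two dates from the question (a Monday steps back three days, a midweek day steps back one):
select tdate
     , to_char(tdate, 'DY', 'NLS_DATE_LANGUAGE=ENGLISH') as dow
     , tdate - decode(tdate - trunc(tdate, 'iw'), 0, 3, 1) as previous_working_day
from  (select date '2015-06-08' as tdate from dual   -- Monday    -> 2015-06-05 (Friday)
       union all
       select date '2015-06-03' from dual);          -- Wednesday -> 2015-06-02 (Tuesday)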
I have a pretty huge table with columns date, account, amount, etc., e.g.
date account amount
4/1/2014 XXXXX1 80
4/1/2014 XXXXX1 20
4/2/2014 XXXXX1 840
4/3/2014 XXXXX1 120
4/1/2014 XXXXX2 130
4/3/2014 XXXXX2 300
...........
(I have 40 months' worth of daily data and multiple accounts.)
The final output I want is the average amount of each account each month. Since there may or may not be a record for any account on a single day, and I have a separate table of holidays from 2011~2014, I am summing up the amount of each account within a month and dividing it by the number of business days of that month. Notice that there is very likely to be a record (or records) on weekends/holidays, so I need to exclude them from the calculation. Also, I want to have a record for each of the dates available in the original table. eg.
date account amount
4/1/2014 XXXXX1 48 ((80+20+840+120)/22)
4/2/2014 XXXXX1 48
4/3/2014 XXXXX1 48
4/1/2014 XXXXX2 19 ((130+300)/22)
4/3/2014 XXXXX2 19
...........
(Suppose the above is the only data I have for Apr-2014.)
I am able to do this in a hacky and slow way, but as I need to join this process with other subqueries, I really need to optimize this query. My current code looks like:
select
date,
account,
sum(amount/days_mon) over (partition by last_day(date))
from(
select
date,
-- there are more calculation to get the account numbers,
-- so this subquery is necessary
account,
amount,
-- this is a list of month-end dates for which the number of
-- business days in that month is 19; similar below.
case when last_day(date) in ('','',...,'') then 19
when last_day(date) in ('','',...,'') then 20
when last_day(date) in ('','',...,'') then 21
when last_day(date) in ('','',...,'') then 22
when last_day(date) in ('','',...,'') then 23
end as days_mon
from mytable tb
inner join lookup_businessday_list busi
on tb.date = busi.date)
So how can I accomplish the above efficiently? Thank you!
This approach uses sub-query factoring - what other RDBMS flavours call common table expressions. The attraction here is that we can pass the output from one CTE as input to another. Find out more.
The first CTE generates a list of dates in a given month (you can extend this over any range you like).
The second CTE uses an anti-join on the first to filter out dates which are holidays and also dates which aren't weekdays. Note that the day number varies according to the NLS_TERRITORY setting; in my realm the weekend is days 6 and 7, but SQL Fiddle is American so there it is 1 and 7.
with dates as ( select date '2014-04-01' + ( level - 1) as d
from dual
connect by level <= 30 )
, bdays as ( select d
, count(d) over () tot_d
from dates
left join holidays
on dates.d = holidays.hol_date
where holidays.hol_date is null
and to_number(to_char(dates.d, 'D')) between 2 and 6
)
select yt.account
, yt.txn_date
, sum(yt.amount) over (partition by yt.account, trunc(yt.txn_date,'MM'))
/tot_d as avg_amt
from your_table yt
join bdays
on bdays.d = yt.txn_date
order by yt.account
, yt.txn_date
/
I haven't rounded the average amount.
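In case it helps to reproduce this, the holidays table the anti-join relies on can be as small as the following sketch (the dates are just examples):
create table holidays (
    hol_date date primary key
);

insert into holidays (hol_date) values (date '2014-04-18');  -- Good Friday (example)
insert into holidays (hol_date) values (date '2014-04-21');  -- Easter Monday (example)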
You have 40 months of data, and most of that data should be very stable.
I will assume that you have a cold body (a big, stable, easily definable range of data) and a hot tail (a small, active part).
Next, I would like to define a minimal period: the smallest date range that is interesting for the business.
It might be a year, month, day, hour, etc. Do you expect to get questions like "what was the average for that account between 19:00 and 12am yesterday?"
I will assume that the answer is DAY.
Then:
I will calculate sum(amount) and count(*) for every account for every DAY of the cold body.
I will not create dummy records if a particular account had no activity on some day.
And I will save day, account, total amount and count in a TABLE.
If there are later modifications to the cold body, you delete and reload the affected days in that table.
For the hot tail there might be multiple strategies:
1. Do the same as above (same process, clear to support).
2. Always calculate on the fly.
3. Use a materialized view as a middle ground between 1 and 2.
The cold body table totalc could also be implemented as a materialized view, but if the data never changes there is no need to rebuild it.
With this you go from (number of accounts) x (number of transactions per day) x (number of days) records down to (number of accounts) x (number of active days) records.
That should speed up all subsequent calculations.
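A sketch of what the cold-body table could look like, reusing your_table and the column names from the previous answer (totalc is the name mentioned above; the cutoff date is just a placeholder for the cold/hot boundary):
create table totalc (
    account       varchar2(20),
    txn_date      date,
    total_amount  number,
    txn_count     number,
    constraint totalc_pk primary key (account, txn_date)
);

insert into totalc (account, txn_date, total_amount, txn_count)
select account
     , trunc(txn_date)
     , sum(amount)
     , count(*)
from   your_table
where  txn_date < date '2014-04-01'   -- placeholder boundary between cold body and hot tail
group  by account, trunc(txn_date);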
I have a recursive sql that I am running which works but gives me the following warning.
SQL0347W The recursive common table expression "DT_LAST_YEAR" may
contain an infinite loop. SQLSTATE=01605
How can I get rid of the warning?
INSERT INTO REP_MAN_TRAN_COUNTS (SITEDIRECTORYID, BUSINESSDATE, TRANCOUNT)
WITH dt_this_year (level, seqdate) AS
(
SELECT 1, date(current timestamp) -7 DAYS FROM sysibm.sysdummy1
UNION ALL
SELECT level, seqdate + level days FROM dt_this_year WHERE level < 1000 AND seqdate + 1 days < date(current timestamp)
)
,dt_last_year (level, seqdate) AS
(
SELECT 1, date(current timestamp) -7 DAYS - 1 year FROM sysibm.sysdummy1
UNION ALL
SELECT level, seqdate + level days FROM dt_last_year WHERE level < 1000 AND seqdate + 1 days < date(current timestamp) -1 year
)
select 10049, date(dts.calendarday), count(*) trancount
from (
SELECT seqdate AS calendarday FROM dt_this_year
UNION
SELECT seqdate AS calendarday FROM dt_last_year
) dts LEFT JOIN ccftrxheader ccf
ON date(dts.calendarday) = date(ccf.businessdate)
WHERE ccf.sitedirectoryid=10049
GROUP BY ccf.sitedirectoryid,dts.calendarday
How do you get rid of warnings?
By changing the code so that it no longer generates the warning in the first place. Hiding warnings is problematic, because it often disguises a potentially larger problem. I'm fairly certain it's complaining here because the termination clause you provide for level can't ever be reached (because you never manipulate it).
Personally, I'd probably re-write your query into something like this:
INSERT INTO Rep_Man_Tran_Counts (siteDirectoryId, businessDate, tranCount)
WITH dt_Calendar_Data (level, calendarDay) AS
(SELECT l, c
FROM (VALUES (1, CURRENT_DATE - 7 DAYS),
(1, CURRENT_DATE - 7 DAYS - 1 YEAR)) t(l, c)
UNION ALL
SELECT level + 1, calendarDay + 1 DAYS
FROM dt_Calendar_Data
WHERE level < 7)
SELECT 10049, dtCal.calendarDay, COALESCE(COUNT(ccf.businessDate), 0) as tranCount
FROM dt_Calendar_Data dtCal
LEFT JOIN ccftrxHeader ccf
ON ccf.businessDate = dtCal.calendarDay
AND ccf.siteDirectoryId = 10049
GROUP BY dtCal.calendarDay
(untested, as you've provided no sample data, and I don't have a DB2 instance)
I've assumed you actually wanted a LEFT JOIN, as opposed to the regular INNER JOIN you were actually getting (due to the condition in the WHERE clause, and probably the GROUP BY as well). To keep days with no transactions from being miscounted, the count is taken over a column from ccf (wrapped in COALESCE(...) for safety), which will give you 0 for those days instead.
I've also assumed that businessDate is a DATE type, and not a timestamp. If it is a timestamp this query needs to be adjusted (note that the function you were using would force the optimizer to ignore indices).
Note that the order of operations with dates matters! Thankfully, when dealing with year ranges, you only have one day to worry about in the Gregorian calendar (February 29th). Your current ordering will compare identical calendar days at the start of the range (which one has the "gap" depends on whether this year or last year is a leap year).
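A quick illustration of that ordering difference around a leap day (hypothetical date, not from your data):
-- As if run on 2016-03-04 (2016 is a leap year):
-- DATE('2016-03-04') - 7 DAYS - 1 YEAR  => 2015-02-26
-- DATE('2016-03-04') - 1 YEAR - 7 DAYS  => 2015-02-25
VALUES (DATE('2016-03-04') - 7 DAYS - 1 YEAR,
        DATE('2016-03-04') - 1 YEAR - 7 DAYS);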
EDIT:
Sure, lets look at that CTE:
FROM(VALUES (1, CURRENT_DATE - 7 DAYS),
(1, CURRENT_DATE - 7 DAYS - 1 YEAR)) t(l, c)
This is just a standard VALUES clause used as a table reference. It is the SQL Standard way to construct a small temp table (rather than referencing dummy tables, which tend to be vendor-specific). If the statement is run on 2014-02-26 then the resulting table will be:
t
l c
===============
1 "2014-02-19"
1 "2013-02-19"
These columns get renamed by the column listing of the CTE, which are then referenced in the join (and in the case of a recursive CTE, by the recursive portion).
This then forms the starting data for the rest of the recursive query:
UNION ALL
SELECT level + 1, calendarDay + 1 DAYS
FROM dt_Calendar_Data
WHERE level < 7
In DB2 (and some other RDBMSs), recursive CTEs essentially execute iteratively, acting off the results of the "previous" invocation. Every time around, we increment level, and add another day to calendarDay. The "next" rows are then:
level calendarDay
======================
2 "2014-02-20"
2 "2013-02-20"
This continues until the "previous" row has level = 7, which means a new row is not generated (check the WHERE clause). In general, it's best to only have one termination condition (and make progress every iteration), to make it easier for the optimizer to spot. The resulting data is then in the ranges:
level calendarDay
=====================
1 "2014-02-19"
. .....
7 "2014-02-26"
1 "2013-02-19"
. .....
7 "2013-02-26"
... as a side note, I generated the this-year and last-year data together to make the number of references shorter. If you only needed the one year, level is unnecessary.
I have basic knowledge of SQL and have a question:
I am trying to select data from a time series (date and wind speed). I want to select the original wind speed value if it lies between hours 7 and 21. If the hour is outside this range, I would like to use the previous wind speed at hour 21. There is also the occasional point where hour 21 does not exist, in which case I would like to use the wind speed at hour 20, 19, etc. until the next available hour is found.
SELECT
date,
CASE WHEN DATEPART(HH,date) < 7 OR DATEPART(HH,date) > 21
THEN <WIND SPEED AT HOUR 21> ELSE <WIND SPEED> END AS ModifiedWindspeed
,WindSpeed, winddirection
from TerrainCorrectedHourlyWind w
This might make things clearer. If the hour is in the specified range, select windspeed. If not then select the wind speed from the prior day at 21 hours.
Though you've tagged the question mysql, I'm guessing this is actually SQL Server because of the DATEPART() function used. Try the following, which uses an OUTER APPLY to get your alternate value:
SELECT Date
, CASE
WHEN DATEPART(HOUR, Date) BETWEEN 7 AND 21 THEN w.WindSpeed
ELSE m.WindSpeed
END AS ModifiedWindSpeed
, w.WindSpeed
, w.WindDirection
FROM TerrainCorrectedHourlyWind AS w
OUTER APPLY (SELECT TOP 1 WindSpeed
FROM TerrainCorrectedHourlyWind
WHERE DATEPART(HOUR, Date) BETWEEN 7 AND 21
AND Date < w.Date
ORDER BY Date DESC) AS m;
Just to explain what this is doing--the OUTER APPLY will get the single most recent record (TOP 1 and ORDER BY Date DESC) for dates prior to the record in question (Date < w.Date) as well as within the hours specified. The CASE near the top chooses whether to use the current value or this alternate one based on the hour.