Postgres query for calendar

I am trying to write a query to retrieve data from an events table for a simple calendar app.
The table structure is as follows:
table name: events
Column | Type
---------+-----------
id | integer
start | timestamp
end | timestamp
the data inside of the table
id| start | end
--+---------------------+--------------------
1 | 2017-09-01 12:00:00 | 2017-09-01 12:00:00
2 | 2017-09-03 10:00:00 | 2017-09-03 12:00:00
3 | 2017-09-08 12:00:00 | 2017-09-11 12:00:00
4 | 2017-09-11 12:00:00 | 2017-09-11 12:00:00
the expected result is
date | event.id
-----------+---------
2017-09-01 | 1
2017-09-03 | 2
2017-09-08 | 3
2017-09-09 | 3
2017-09-10 | 3
2017-09-11 | 3
2017-09-11 | 4
As you can see, only days with an event (not just the start and end days, but also the days in between) are retrieved; days without an event are not retrieved at all.
In a second step I would like to be able to limit the number of distinct days, e.g. "get 4 days with events", which might be more than 4 rows.
Right now I am able to retrieve the events based on start date only using the following query:
SELECT start::date, id FROM events WHERE events.start::date >= '2017-09-01' LIMIT 3
Things I already thought about are DENSE_RANK and generate_series, but so far I haven't found a way to fill the gaps between start and end without also producing days for which there is no data.
So in short:
What I want to get is: the next X days on which there is an event. A date with an event is a day where start <= date <= end.
Any ideas?
Edit
Thanks to Tim I now have the following query (modified to use generate_series instead of a table, and with a limit added using DENSE_RANK):
SELECT date, id
FROM (
    SELECT
        DENSE_RANK() OVER (ORDER BY t1.date) AS rank,
        t1.date,
        events.id
    FROM generate_series([DATE]::date, [DATE]::date + interval '365 day', '1 day') AS t1(date) -- column alias needed so t1.date resolves
    INNER JOIN events
        ON t1.date BETWEEN events.start::date AND events."end"::date
) AS t
WHERE rank <= [LIMIT]
This works really well, even though I am not 100% sure about the performance impact of this kind of limit.
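One possible alternative, if ranking a full joined year ever becomes a bottleneck, is to pick the X distinct event days first and only then join the events back to them. This is just a sketch under the same [DATE]/[LIMIT] placeholder convention, not benchmarked:

WITH hit_days AS (
    SELECT d::date AS day
    FROM generate_series([DATE]::date,
                         [DATE]::date + interval '365 days',
                         interval '1 day') AS d
    WHERE EXISTS (
        SELECT 1
        FROM events e
        WHERE d::date BETWEEN e.start::date AND e."end"::date
    )
    ORDER BY day
    LIMIT [LIMIT]  -- the X distinct days with at least one event
)
SELECT h.day AS date, e.id
FROM hit_days h
JOIN events e
    ON h.day BETWEEN e.start::date AND e."end"::date
ORDER BY h.day, e.id;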

I think you really need a calendar table here to cover the full range of dates in which your data may appear. In the CTE below, I generate a calendar covering the month of September 2017. Then all we need to do is inner join this calendar table with the events table on the criterion of a given day falling within a given range.
WITH cte AS (
    SELECT CAST('2017-09-01' AS DATE) + (n || ' day')::INTERVAL AS date
    FROM generate_series(0, 29) AS n
)
SELECT
    t1.date,
    t2.id
FROM cte t1
INNER JOIN events t2
    -- "end" must be quoted; it is a reserved word in Postgres
    ON t1.date BETWEEN CAST(t2.start AS DATE) AND CAST(t2."end" AS DATE);
Output:
date                | id
--------------------+----
01.09.2017 00:00:00 | 1
03.09.2017 00:00:00 | 2
08.09.2017 00:00:00 | 3
09.09.2017 00:00:00 | 3
10.09.2017 00:00:00 | 3
11.09.2017 00:00:00 | 3
11.09.2017 00:00:00 | 4
Demo here: Rextester
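As a side note, in Postgres generate_series can produce the calendar timestamps directly, which avoids the string-to-interval concatenation in the CTE. A minimal equivalent sketch of the same join:

SELECT d::date AS date, e.id
FROM generate_series('2017-09-01'::date, '2017-09-30'::date, interval '1 day') AS d
INNER JOIN events e
    ON d::date BETWEEN e.start::date AND e."end"::date
ORDER BY 1, 2;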

Related

Retrieving 52 weeks after the result of a subquery

From a table that contains sales, I retrieved the last week present in that table, i.e. the last week in which sales were made. Date is always the first day of the month, but that doesn't matter; the really important columns are Week and Partial_week.
The result is simple:
+------------+---------+--------------+
| Date | Week | Partial_week |
+------------+---------+--------------+
| 2020-02-01 | 2020-09 | 2020M02W09 |
+------------+---------+--------------+
Let's call it t1
I have a table with the first day of each month, and every week and partial week, from 2015 to 2025
(when a week spans two months, it is split into two partial weeks that share the same week number but belong to different months). It looks like this:
+------------+---------+--------------+
| Date | Week | Partial_week |
+------------+---------+--------------+
| 2020-02-01 | 2020-05 | 2020M02W05 |
| 2020-02-01 | 2020-06 | 2020M02W06 |
| 2020-02-01 | 2020-07 | 2020M02W07 |
| 2020-02-01 | 2020-08 | 2020M02W08 |
| 2020-02-01 | 2020-09 | 2020M02W09 |
| 2020-03-01 | 2020-09 | 2020M03W09 |
+------------+---------+--------------+
Let's call it t2
I now need to retrieve everything in t2 that is between 1 and 52 weeks after the week retrieved in t1 (this should get me every week and partial week until 2021-09 or so).
I thought about having a
'select top 52 distinct week from t2'
joined on t1 with a where clause 'where t1.week < t2.week',
then joining everything back on t2 to get every partial week too,
but that doesn't work because t1.week comes out null for every week (I wish t1.week could just be a variable, since it only has one row...).
Any ideas would be appreciated.
Your logic seems to be close. Put the initial query in a Scalar Subquery to handle it like a variable:
select *
from t2
where t2.week >=
      ( select week from t1 -- i.e. your existing query to return the latest week
      )
qualify
      dense_rank()
      over (order by week) <= 52
You can also switch to a join:
select *
from t2
join
    ( select week from t1 -- i.e. your existing query to return the latest week
    ) as t1
    on t2.week >= t1.week
qualify
    dense_rank() -- next 52 weeks & partial weeks
    over (order by t2.week) <= 52
The explain plan of the scalar subquery version might be the better of the two.
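Note that QUALIFY is not standard SQL (it comes from Teradata and is also available in Snowflake and a few other engines). On databases without it, the same filter can be written with a derived table; a sketch, assuming the same t1/t2 names as above:

select *
from (
    select t2.*,
           dense_rank() over (order by t2.week) as wk_rank
    from t2
    where t2.week >= (select week from t1)
) as ranked
where wk_rank <= 52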

How to write a SQL statement to sum data using group by the same day of every two neighboring months

I have a data table like this:
datetime data
-----------------------
...
2017/8/24 6.0
2017/8/25 5.0
...
2017/9/24 6.0
2017/9/25 6.2
...
2017/10/24 8.1
2017/10/25 8.2
I want to write a SQL statement to sum the data, grouping by the 24th of every two neighboring months within a certain range of time, such as from 2017/7/20 to 2017/10/25 as above.
How to write this SQL statement? I'm using SQL Server 2008 R2.
The expected results table is like this:
datetime_range data_sum
------------------------------------
...
2017/8/24~2017/9/24 100.9
2017/9/24~2017/10/24 120.2
...
One conceptual way to proceed here is to redefine a "month" as ending on the 24th of each normal month. Using the SQL Server month function, we will assign any date occurring after the 24th as belonging to the next month. Then we can aggregate by the year along with this shifted month to obtain the sum of data.
WITH cte AS (
    SELECT
        data,
        YEAR(datetime) AS year,
        -- dates after the 24th belong to the next accounting month
        -- (caveat: for December this yields month 13; wrap to January of the
        -- following year if your data spans a year end)
        CASE WHEN DAY(datetime) > 24
             THEN MONTH(datetime) + 1
             ELSE MONTH(datetime) END AS month
    FROM yourTable
)
SELECT
    CONVERT(varchar(4), year) + '/' + CONVERT(varchar(2), month) +
    '/25~' +
    CONVERT(varchar(4), year) + '/' + CONVERT(varchar(2), (month + 1)) +
    '/24' AS datetime_range,
    SUM(data) AS data_sum
FROM cte
GROUP BY
    year, month;
Note that your suggested ranges seem to include the 24th on both ends, which does not make sense from an accounting point of view. I assume that the month includes and ends on the 24th (i.e. the 25th is the first day of the next accounting period).
Demo
I would suggest dynamically building some date range rows so that you can then join your data to those for aggregation, like this example:
+----+---------------------+---------------------+----------------+
| | period_start_dt | period_end_dt | your_data_here |
+----+---------------------+---------------------+----------------+
| 1 | 24.04.2017 00:00:00 | 24.05.2017 00:00:00 | 1 |
| 2 | 24.05.2017 00:00:00 | 24.06.2017 00:00:00 | 1 |
| 3 | 24.06.2017 00:00:00 | 24.07.2017 00:00:00 | 1 |
| 4 | 24.07.2017 00:00:00 | 24.08.2017 00:00:00 | 1 |
| 5 | 24.08.2017 00:00:00 | 24.09.2017 00:00:00 | 1 |
| 6 | 24.09.2017 00:00:00 | 24.10.2017 00:00:00 | 1 |
| 7 | 24.10.2017 00:00:00 | 24.11.2017 00:00:00 | 1 |
| 8 | 24.11.2017 00:00:00 | 24.12.2017 00:00:00 | 1 |
| 9 | 24.12.2017 00:00:00 | 24.01.2018 00:00:00 | 1 |
| 10 | 24.01.2018 00:00:00 | 24.02.2018 00:00:00 | 1 |
| 11 | 24.02.2018 00:00:00 | 24.03.2018 00:00:00 | 1 |
| 12 | 24.03.2018 00:00:00 | 24.04.2018 00:00:00 | 1 |
+----+---------------------+---------------------+----------------+
DEMO
declare @start_dt date;
set @start_dt = '20170424';

select
    period_start_dt, period_end_dt, sum(1) as your_data_here
from (
    select
        dateadd(month, m.n, start_dt)     as period_start_dt,
        dateadd(month, m.n + 1, start_dt) as period_end_dt
    from (
        select @start_dt as start_dt
    ) seed
    cross join (
        select 0 n union all
        select 1 union all
        select 2 union all
        select 3 union all
        select 4 union all
        select 5 union all
        select 6 union all
        select 7 union all
        select 8 union all
        select 9 union all
        select 10 union all
        select 11
    ) m
) r
-- LEFT JOIN YOUR DATA
-- ON yourdata.date >= r.period_start_dt and yourdata.date < r.period_end_dt
group by
    period_start_dt, period_end_dt
Please don't be tempted to use "between" when it comes to joining to your data. Follow the note above and use yourdata.date >= r.period_start_dt and yourdata.date < r.period_end_dt, otherwise you could double count information, as between is inclusive of both the lower and upper boundaries.
I think the simplest way is to subtract 24 days and aggregate by the resulting month (this assumes, as above, that each period runs from the 25th of one month through the 24th of the next, so that subtracting 24 days maps every period onto exactly one calendar month):
select year(dateadd(day, -24, datetime)) as yr,
       month(dateadd(day, -24, datetime)) as mon,
       sum(data) as data_sum
from t
group by year(dateadd(day, -24, datetime)),
         month(dateadd(day, -24, datetime));
You can format yr and mon to get the labels for the specific ranges, but this does the aggregation (and the yr/mon columns might be sufficient).
Step 0: Build a calendar table. Every database needs a calendar table eventually to simplify this sort of calculation.
In this table you may have columns such as:
Date (primary key)
Day
Month
Year
Quarter
Half-year (e.g. 1 or 2)
Day of year (1 to 366)
Day of week (numeric or text)
Is weekend (seems redundant now, but is a huge time saver later on)
Fiscal quarter/year (if your company's fiscal year doesn't start on Jan. 1)
Is Holiday
etc.
If your company starts its month on the 24th, then you can add a "Fiscal Month" column that represents that.
Step 1: Join on the calendar table
Step 2: Group by the columns in the calendar table.
Calendar tables sound weird at first, but once you realize that they are in fact tiny even if they span a couple hundred years they quickly become a major asset.
Don't try to cheap out on disk space by using computed columns. You want real columns because they are much faster and can be indexed if necessary. (Though honestly, usually just the PK index is enough for even wide calendar tables.)
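As a rough illustration only (the table name, column list, and the 25th-to-24th fiscal convention below are assumptions for this question, not a standard), a minimal SQL Server calendar table might start like this:

CREATE TABLE dbo.Calendar (
    [Date]      date     NOT NULL PRIMARY KEY,
    [Day]       tinyint  NOT NULL,
    [Month]     tinyint  NOT NULL,
    [Year]      smallint NOT NULL,
    DayOfWeek   tinyint  NOT NULL,
    IsWeekend   bit      NOT NULL,
    FiscalMonth tinyint  NOT NULL,  -- fiscal month runs from the 25th through the 24th
    FiscalYear  smallint NOT NULL
);

-- The fiscal columns can be derived while populating the table, e.g.:
--   FiscalMonth = MONTH(DATEADD(day, -24, [Date]))
--   FiscalYear  = YEAR(DATEADD(day, -24, [Date]))

The original aggregation then reduces to a join plus GROUP BY FiscalYear, FiscalMonth.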

How to identify MIN value for records within a rolling date range in SQL

I am trying to calculate a MIN date by Patient_ID for each record in my dataset that dynamically references the last 30 days from the date (Discharge_Dt) on that row. My initial thought was to use a window function, but I opted for a subquery, which is close, but not quite what I need.
Please note, my sample query is also missing logic that limits the MIN Discharge_Dt to the last 30 days, in other words, I do not want a MIN Discharge_Dt that is older than 30 days for any given row.
Sample Query:
SELECT Patient_ID,
Discharge_Dt,
/* Calculating the MIN Discharge_Dt by Patient_ID for the last 30
days based upon the Discharge_Dt for that row */
(SELECT MIN(Discharge_Dt)
FROM admissions_ds AS b
WHERE a.Patient_ID = b.Patient_ID AND
a.Discharge_Dt >= DATEADD('D', -30, GETDATE())) AS MIN_Dt
FROM admissions_ds AS a
Desired Output Table:
Patient_ID | Discharge_Dt | MIN_Dt
10 | 2017-08-15 | 2017-08-15
10 | 2017-08-31 | 2017-08-15
10 | 2017-09-21 | 2017-08-31
15 | 2017-07-01 | 2017-07-01
15 | 2017-07-18 | 2017-07-01
20 | 2017-05-05 | 2017-05-05
25 | 2017-09-24 | 2017-09-24
Here you go,
just a simple self-join is required.
drop table if exists admissions_ds;
create table admissions_ds (Patient_ID int, Discharge_Dt date);
insert into admissions_ds
values
    (10, '2017-08-15'),
    (10, '2017-08-31'),
    (10, '2017-09-21'),
    (15, '2017-07-01'),
    (15, '2017-07-18'),
    (20, '2017-05-05'),
    (25, '2017-09-24');

select t1.Patient_ID, t1.Discharge_Dt, min(t2.Discharge_Dt) as min_dt
from admissions_ds as t1
join admissions_ds as t2
  on t1.Patient_ID = t2.Patient_ID
 and t2.Discharge_Dt > t1.Discharge_Dt - interval '30 days'
 -- rows later than t1.Discharge_Dt also match, but they can never lower
 -- the MIN, since each row always joins to itself
group by 1, 2
order by 1, 2;
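If your database supports RANGE window frames with interval offsets (e.g. Postgres 11+ or Oracle; this is an assumption about your platform), the same rolling minimum can be expressed as a window function instead of a self-join. A sketch:

select Patient_ID,
       Discharge_Dt,
       min(Discharge_Dt) over (
           partition by Patient_ID
           order by Discharge_Dt
           range between interval '30 days' preceding and current row
       ) as min_dt
from admissions_ds
order by Patient_ID, Discharge_Dt;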

Filling Out & Filtering Irregular Time Series Data

Using Postgresql 9.4, I am trying to craft a query on time series log data that logs new values whenever the value updates (not on a schedule). The log can update anywhere from several times a minute to once a day.
I need the query to accomplish the following:
Thin out overly dense data by selecting just the first entry within each timestamp range
Fill in sparse data by using the last reading for the log value. For example, suppose I am grouping the data by hour and there was an entry at 8am with a log value of 10, and the next entry isn't until 11am with a log value of 15. I would want the query to return something like this:
Timestamp | Value
2015-07-01 08:00 | 10
2015-07-01 09:00 | 10
2015-07-01 10:00 | 10
2015-07-01 11:00 | 15
I have got a query that accomplishes the first of these goals:
with time_range as (
    select hour
    from generate_series('2015-07-01 00:00'::timestamp, '2015-07-02 00:00'::timestamp, '1 hour') as hour
),
ranked_logs as (
    select
        date_trunc('hour', time_stamp) as log_hour,
        log_val,
        rank() over (partition by date_trunc('hour', time_stamp) order by time_stamp asc)
    from time_series
)
select
    time_range.hour,
    ranked_logs.log_val
from time_range
left outer join ranked_logs on ranked_logs.log_hour = time_range.hour and ranked_logs.rank = 1;
But I can't figure out how to fill in the nulls where there is no value. I tried using the lag() feature of Postgresql's Window functions, but it didn't work when there were multiple nulls in a row.
Here's a SQLFiddle that demonstrates the issue:
http://sqlfiddle.com/#!15/f4d13/5/0
Your output columns are log_hour and first_value:
with time_range as (
    select hour
    from generate_series('2015-07-01 00:00'::timestamp, '2015-07-02 00:00'::timestamp, '1 hour') as hour
),
ranked_logs as (
    select
        date_trunc('hour', time_stamp) as log_hour,
        log_val,
        rank() over (partition by date_trunc('hour', time_stamp) order by time_stamp asc)
    from time_series
),
base as (
    select
        time_range.hour as lh,
        ranked_logs.log_val
    from time_range
    left outer join ranked_logs on ranked_logs.log_hour = time_range.hour and ranked_logs.rank = 1
)
SELECT
    log_hour, log_val, value_partition,
    first_value(log_val) over (partition by value_partition order by log_hour)
FROM (
    SELECT
        date_trunc('hour', base.lh) as log_hour,
        log_val,
        sum(case when log_val is null then 0 else 1 end) over (order by base.lh) as value_partition
    FROM base
) as q
UPDATE
This is what your query returns:
Timestamp        | Value
2015-07-01 01:00 | 10
2015-07-01 02:00 | null
2015-07-01 03:00 | null
2015-07-01 04:00 | 15
2015-07-01 05:00 | null
2015-07-01 06:00 | 19
2015-07-01 08:00 | 13
I want this result set to be split into groups like this:

2015-07-01 01:00 | 10
2015-07-01 02:00 | null
2015-07-01 03:00 | null

2015-07-01 04:00 | 15
2015-07-01 05:00 | null

2015-07-01 06:00 | 19

2015-07-01 08:00 | 13
and to assign to every row in a group the value of the first row of that group (done by the last select).
In this case, a way to obtain the grouping is to create a column holding the count of non-null
values seen up to and including the current row, and to split by that value (the sum(case) expression):
value | sum(case)
------+----------
 10   | 1
 null | 1
 null | 1
 15   | 2   <-- new non-null, increment
 null | 2
 19   | 3   <-- new non-null, increment
 13   | 4   <-- new non-null, increment
and now I can partition by sum(case).
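A minimal, self-contained illustration of that trick on toy data (note that count(val) ignores NULLs, so it is equivalent to the sum(case ...) expression above):

WITH sparse(ts, val) AS (
    VALUES ('2015-07-01 01:00'::timestamp, 10),
           ('2015-07-01 02:00', NULL),
           ('2015-07-01 03:00', NULL),
           ('2015-07-01 04:00', 15)
)
SELECT ts,
       first_value(val) OVER (PARTITION BY grp ORDER BY ts) AS filled
FROM (
    SELECT ts, val,
           count(val) OVER (ORDER BY ts) AS grp  -- increments only on non-NULL rows
    FROM sparse
) s
ORDER BY ts;

This returns 10, 10, 10, 15: each NULL row inherits the first value of its group.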

Finding correlated values from second table without resorting to PL/SQL

I have the following two tables in my database:
a) A table containing values acquired at a certain date (you may think of these as, say, temperature readings):
sensor_id | acquired | value
----------+---------------------+--------
1 | 2009-04-01 10:00:00 | 20
1 | 2009-04-01 10:01:00 | 21
1 | 2009-04-01 10:02:00 | 20
1 | 2009-04-01 10:09:00 | 20
1 | 2009-04-01 10:11:00 | 25
1 | 2009-04-01 10:15:00 | 30
...
The interval between the readings may differ, but the combination of (sensor_id, acquired) is unique.
b) A second table containing time periods and a description (you may think of these as, say, periods when someone turned on the radiator):
sensor_id | start_date | end_date | description
----------+---------------------+---------------------+------------------
1 | 2009-04-01 10:00:00 | 2009-04-01 10:02:00 | some description
1 | 2009-04-01 10:10:00 | 2009-04-01 10:14:00 | something else
Again, the length of the period may differ, but there will never be overlapping time periods for any given sensor.
I want to get a result that looks like this for any sensor and any date range:
sensor id | start date | v1 | end date | v2 | description
----------+---------------------+----+---------------------+----+------------------
1 | 2009-04-01 10:00:00 | 20 | 2009-04-01 10:02:00 | 20 | some description
1 | 2009-04-01 10:10:00 | 25 | 2009-04-01 10:14:00 | 30 | something else
Or in text form: given a sensor_id and a date range of range_start and range_end,
find all time periods that overlap the date range (that is, start_date < range_end and end_date > range_start), and for each of these rows find the corresponding values from the value table for the time period's start_date and end_date (the first row with acquired > start_date, and the first row with acquired > end_date).
If it wasn't for the start_value and end_value columns, this would be a textbook trivial example of how to join two tables.
Can I somehow get the output I need in one SQL statement without resorting to writing a PL/SQL function to find these values?
Unless I have overlooked something blatantly obvious, this can't be done with simple subselects.
Database is Oracle 11g, so any Oracle-specific features are acceptable.
Edit: yes, looping is possible, but I want to know if this can be done with a single SQL select.
You can give this a try. Note the caveats at the end though.
SELECT
    RNG.sensor_id,
    RNG.start_date,
    RDG1.value AS v1,
    RNG.end_date,
    RDG2.value AS v2,
    RNG.description
FROM
    Ranges RNG
    INNER JOIN Readings RDG1 ON
        RDG1.sensor_id = RNG.sensor_id AND
        RDG1.acquired >= RNG.start_date
    LEFT OUTER JOIN Readings RDG1_NE ON
        RDG1_NE.sensor_id = RDG1.sensor_id AND
        RDG1_NE.acquired >= RNG.start_date AND
        RDG1_NE.acquired < RDG1.acquired
    INNER JOIN Readings RDG2 ON
        RDG2.sensor_id = RNG.sensor_id AND
        RDG2.acquired >= RNG.end_date
    LEFT OUTER JOIN Readings RDG2_NE ON
        RDG2_NE.sensor_id = RDG2.sensor_id AND
        RDG2_NE.acquired >= RNG.end_date AND
        RDG2_NE.acquired < RDG2.acquired
WHERE
    RDG1_NE.sensor_id IS NULL AND
    RDG2_NE.sensor_id IS NULL
This uses the first reading after the start date of the range and the first reading after the end date (personally, I'd think using the last date before the start and end would make more sense or the closest value, but I don't know your application). If there is no such reading then you won't get anything at all. You can change the INNER JOINs to OUTER and put additional logic in to handle those situations based on your own business rules.
It seems pretty straightforward.
Find the sensor values for each range. Find a row whose acquired value (call it X) satisfies X > start_date, such that no other row has acquired > start_date and acquired < X. Do the same for the end date.
Select only the ranges that match the query: start_date before and end_date after the dates supplied by the query.
In SQL this would be something like the following.
SELECT R1.*, SV1.value AS v1, SV2.value AS v2  -- the desired output wants the values, not the timestamps
FROM ranges R1
INNER JOIN sensor_values SV1 ON SV1.sensor_id = R1.sensor_id
INNER JOIN sensor_values SV2 ON SV2.sensor_id = R1.sensor_id
WHERE SV1.acquired > R1.start_date
  AND NOT EXISTS (
      SELECT *
      FROM sensor_values SV3
      WHERE SV3.acquired > R1.start_date
        AND SV3.acquired < SV1.acquired)
  AND SV2.acquired > R1.end_date
  AND NOT EXISTS (
      SELECT *
      FROM sensor_values SV4
      WHERE SV4.acquired > R1.end_date
        AND SV4.acquired < SV2.acquired)
  AND R1.start_date < :range_end   -- bind variables for the query window
  AND R1.end_date > :range_start
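On Oracle, another single-statement option is a scalar subquery with KEEP (DENSE_RANK FIRST), which returns the value of the earliest qualifying reading directly. A sketch only, reusing the ranges/sensor_values names from the answer above and bind variables for the query window:

SELECT r1.sensor_id,
       r1.start_date,
       (SELECT MIN(sv.value) KEEP (DENSE_RANK FIRST ORDER BY sv.acquired)
          FROM sensor_values sv
         WHERE sv.sensor_id = r1.sensor_id
           AND sv.acquired > r1.start_date) AS v1,
       r1.end_date,
       (SELECT MIN(sv.value) KEEP (DENSE_RANK FIRST ORDER BY sv.acquired)
          FROM sensor_values sv
         WHERE sv.sensor_id = r1.sensor_id
           AND sv.acquired > r1.end_date) AS v2,
       r1.description
  FROM ranges r1
 WHERE r1.start_date < :range_end
   AND r1.end_date > :range_start;

The MIN(...) KEEP (DENSE_RANK FIRST ORDER BY ...) aggregate picks the value belonging to the smallest acquired timestamp, which avoids the NOT EXISTS anti-joins entirely.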