Calculating deltas in time series with duplicate & missing values - sql

I have an Oracle table that consists of tuples of logtime/value1, value2, ..., plus additional columns such as a metering point id. The values are sampled values of different counters that are each monotonically increasing, i.e. a newer value cannot be less than an older one. However, values can remain equal for several samplings, and individual values are sometimes missing, so the corresponding table entry is NULL while other values at the same logtime are valid. Also, the intervals between logtimes are not constant.
In the following, for simplicity, I will consider only the logtime and one counter value.
I have to calculate the deltas from each logtime to the previous one. Using the method described in another question here gives two NULL deltas for each NULL value, because two subtractions are invalid. A second solution fails when consecutive values are identical, since the difference from the previous value is calculated twice.
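To make the failure mode concrete, here is a minimal sketch of the naive LAG approach on invented sample values (table and column names as in my query below):

-- Hypothetical sample of one counter; val is NULL where a sample is missing.
-- logtime  val     naive_delta
-- 10:00    100     (null)      -- no previous row
-- 10:05    (null)  (null)      -- first invalid delta: val is NULL
-- 10:10    120     (null)      -- second invalid delta: LAG(val) is NULL
-- 10:15    120     0
SELECT logtime, val,
       val - LAG(val) OVER (ORDER BY logtime) AS naive_delta
FROM tab;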
Another solution is to construct a derived table/view with those NULL values replaced by the latest older valid value. My approach looks like this:
SELECT A.logtime, A.val,
       (A.val - (SELECT MAX(C.val)
                 FROM tab C
                 WHERE C.logtime =
                       (SELECT MAX(B.logtime)
                        FROM tab B
                        WHERE B.logtime < A.logtime
                          AND B.val IS NOT NULL))) AS delta
FROM tab A;
I suspect that this will result in quite an inefficient query, especially when doing this for all N counters in the table, which will result in (1 + 2*N) SELECTs. It also does not take advantage of the fact that the counter is monotonically increasing.
Are there any alternative approaches? I'd think others have similar problems, too.
An obvious solution would of course be to fill in those NULL values by constructing a new table or modifying the existing one, but unfortunately that is not possible in this case. Avoiding/eliminating them on entry isn't possible either.
Any help would be greatly appreciated.

select logtime,
       val,
       last_value(val ignore nulls) over (order by logtime) as not_null_val,
       last_value(val ignore nulls) over (order by logtime) -
       last_value(val ignore nulls) over (order by logtime
                                          rows between unbounded preceding and 1 preceding) as delta
from your_tab
order by logtime;
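For illustration, a hand trace on the same invented sample values as above: the second window, framed to end at 1 preceding, yields the previous non-NULL value, so gaps produce a delta of 0 instead of two NULLs:

-- logtime  val     not_null_val  delta
-- 10:00    100     100           (null)  -- no preceding value
-- 10:05    (null)  100           0       -- gap filled from the previous row
-- 10:10    120     120           20
-- 10:15    120     120           0       -- repeated value handled correctly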

I found a way to avoid the nested SELECT statements using Oracle SQL's built-in LAG function:
SELECT logtime, val,
       NVL(val - LAG(val IGNORE NULLS) OVER (ORDER BY logtime), 0) AS delta
FROM tab;
This seems to work as I intended.
(Repeated here as a separate answer)
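The same pattern also extends to all N counters in a single pass, avoiding the (1 + 2*N) SELECTs mentioned in the question. A sketch, assuming two hypothetical counter columns val1 and val2:

SELECT logtime,
       NVL(val1 - LAG(val1 IGNORE NULLS) OVER (ORDER BY logtime), 0) AS delta1,
       NVL(val2 - LAG(val2 IGNORE NULLS) OVER (ORDER BY logtime), 0) AS delta2
FROM tab;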

Related

Replace first and last row having null values or missing values with previous/next available value in Postgresql12

I am a newbie to PostgreSQL.
I want to replace the first and last rows of table T, which have null or missing values, with the next/previous available value. Also, if there are missing values in the middle, they should be replaced with the previous available value. For example:
id  value  EXPECTED
1          1
2   1      1
3   2      2
4          2
5   3      3
6          3
I am aware that there are many similar threads, but none seems to address this problem where the start and end also have missing values (in addition to some missing values in the middle rows). Also, some of the concepts such as first_row, partition by, and top 1 (which does not work for Postgres) are very hard to grasp as a newbie.
So far I have referred to the following threads: value from previous row and Previous available value.
Could someone kindly point me in the right direction to address this problem?
Thank you
Unfortunately, Postgres doesn't have the ignore nulls option on lead() and lag(). In your example, you only need to borrow from the next row. So:
select t.*,
       coalesce(value,
                lag(value) over (order by id),
                lead(value) over (order by id)) as expected
from t;
If you had multiple NULLs in a row, then this is trickier. One solution is to define "groups" based on when a value starts or stops. You can do this with a cumulative count of the values -- ascending and descending:
select t.*,
       coalesce(value,
                max(value) over (partition by grp_before),
                max(value) over (partition by grp_after)
               ) as expected
from (select t.*,
             count(value) over (order by id asc) as grp_before,
             count(value) over (order by id desc) as grp_after
      from t
     ) t;
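To see why this works, here is a hand trace on the sample data. grp_before counts non-NULL values from the top, grp_after from the bottom, so each group contains exactly one non-NULL value:

id  value  grp_before  grp_after  expected
1          0           3          1   (grp_before group has no value; grp_after group {1,2} gives 1)
2   1      1           3          1
3   2      2           2          2
4          2           1          2   (grp_before group {3,4} gives 2)
5   3      3           1          3
6          3           0          3   (grp_before group {5,6} gives 3)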
Here is a db<>fiddle.

teradata sql problem: how to calculate the time difference in different columns with previous row order by another column?

This may not sound like a new question here, but it is a little tricky...
I want to run SQL similar to this below in Teradata:
sel (col2- LAG(col1, 1)) minute OVER (ORDER BY session_id)
from data
I want to calculate the time difference by minutes between col1 and col2 ordered by session_id. So there are three columns here...
Thank you in advance.
I think the syntax you want is:
select (col2 - LAG(col1) OVER (ORDER BY session_id)) day(4) to minute
from data
Note that the 1 is not necessary; it is the default offset for LAG().
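If you need the difference as a plain number of minutes rather than an interval, one possible extension (a sketch, assuming Teradata's EXTRACT can be applied to the interval result) is:

select extract(day from iv) * 1440
     + extract(hour from iv) * 60
     + extract(minute from iv) as diff_minutes
from (
    select (col2 - LAG(col1) OVER (ORDER BY session_id)) day(4) to minute as iv
    from data
) t;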

Why Window Functions Require My Aggregated Column in Group

I have been working with window functions a fair amount, but I don't think I understand enough about how they work to explain why they behave the way they do.
For the query that I was working on (below), why am I required to take my aggregated field and add it to the group by? (In the second half of my query below I am unable to produce a result if I don't include "Events" in my second group by)
With Data as (
    Select
        CohortDate as month,
        datediff(week, CohortDate, EventDate) as EventAge,
        count(distinct case when EventDate is not null then GUID end) as Events
    From MyTable
    where month >= [getdate():month] - interval '12 months'
    group by 1, 2
    order by 1, 2
)
Select
    month,
    EventAge,
    sum(Events) over (partition by month order by SubAge asc
                      rows between unbounded preceding and current row) as TotEvents
from data
group by 1, 2, Events
order by 1, 2
I have run into this enough that I have just taken it for granted, but would really love some more color as to why this is needed. Is there a way I should be formatting these differently in order to avoid this (somewhat non-intuitive) requirement?
Thanks a ton!
What you are looking for is presumably a cumulative sum. That would be:
select month, EventAge,
       sum(sum(Events)) over (partition by month
                              order by SubAge asc
                              rows between unbounded preceding and current row
                             ) as TotEvents
from data
group by 1, 2
order by 1, 2;
Why? That might be a little hard to explain. Perhaps if you see the equivalent version with a subquery it will be clearer:
select me.*,
       sum(sum_events) over (partition by month
                             order by SubAge asc
                             rows between unbounded preceding and current row
                            ) as TotEvents
from (select month, EventAge, sum(events) as sum_events
      from data
      group by 1, 2
     ) me
order by 1, 2;
This is pretty much exact shorthand for that query. The window function is evaluated after aggregation. You want to sum the SUM of the events after the aggregation; hence, you need sum(sum(events)). After the aggregation, events itself is no longer available.
The nesting of aggregation functions is awkward at first -- at least it was for me. When I first started using window functions, I spent a few days writing aggregation queries using subqueries and then rewriting them without the subqueries. Quickly, I got used to writing them without subqueries.

postgres select aggregate timespans

I have a table with the following structure:
timestamp-start, timestamp-stop
1,5
6,10
25,30
31,35
...
I am only interested in continuous timespans, i.e. where the break between a timestamp-stop and the following timestamp-start is less than 3.
How could I get the aggregated covered timespans as a result:
timestamp-start,timestamp-stop
1,10
25,35
The reason I am considering this is that a user may request a timespan that would need to return several thousand rows. However, most records are continuous, and using the above method could potentially reduce many thousands of rows down to just a dozen. Or is the added computation not worth the savings in bandwidth and latency?
You can group the time stamps in three steps:
1. Add a flag to determine where a new period starts (that is, a gap greater than 3).
2. Cumulatively sum the flag to assign groupings.
3. Re-aggregate with the new groupings.
The code looks like:
select min(ts_start) as ts_start, max(ts_end) as ts_end
from (select t.*,
             sum(flag) over (order by ts_start) as grouping
      from (select t.*,
                   (coalesce(ts_start - lag(ts_end) over (order by ts_start), 0) > 3)::int as flag
            from t
           ) t
     ) t
group by grouping;
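Applied to the sample data, the intermediate values are (a hand trace):

ts_start  ts_end  gap to previous  flag  grouping
1         5       (none)           0     0
6         10      6 - 5 = 1        0     0
25        30      25 - 10 = 15     1     1
31        35      31 - 30 = 1      0     1

so the final aggregation returns the desired rows (1, 10) and (25, 35).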

SQL query to identify 0 AFTER a 1

Let's say I have two columns: Date and Indicator.
Usually the indicator goes from 0 to 1 (when the data is sorted by date), and I want to be able to identify when it goes from 1 to 0 instead. Is there an easy way to do this with SQL?
I am already aggregating other fields in the same table. If I can add this as another aggregation (e.g. without using a separate "where" statement or passing over the data a second time), it would be pretty awesome.
This is the phenomenon I want to catch:
Date    Indicator
1/5/01  0
1/4/01  0
1/3/01  1
1/2/01  1
1/1/01  0
This isn't a Teradata-specific answer, but it can be done in normal SQL.
Assuming that the sequence is already 'complete' and x(n+1) can be derived from x(n), such as when the dates are sequential and all present:
SELECT date -- the 0 on the day following the 1
FROM r curr
JOIN r prev
    -- join each day with the previous day
    ON curr.date = dateadd(d, 1, prev.date)
WHERE curr.indicator = 0
  AND prev.indicator = 1
YMMV on the ability of such a query to use indexes efficiently.
If the sequence is not complete, the same can be applied after constructing a surrogate sequence which is well ordered and similarly 'complete'.
This can also be done using correlated subqueries, each selecting the indicator of the 'previous max', but... ugh.
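For completeness, a sketch of that correlated-subquery variant (same table r and columns as above; likely slow on large tables):

SELECT curr.date
FROM r curr
WHERE curr.indicator = 0
  AND 1 = (SELECT prev.indicator
           FROM r prev
           WHERE prev.date = (SELECT MAX(p.date)
                              FROM r p
                              WHERE p.date < curr.date))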
Joining the table against itself is quite generic, but most SQL dialects now support analytic functions. Ideally you could use LAG(), but Teradata seems to support only the absolute minimum of these, so they point you to SUM() combined with a preceding-rows frame.
In any regard, this method avoids a potentially costly join and effectively deals with gaps in the data, while making maximum use of indexes.
SELECT *
FROM yourTable t
QUALIFY t.indicator
        < SUM(t.indicator) OVER (PARTITION BY t.somecolumn /* optional */
                                 ORDER BY t.Date
                                 ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING)
QUALIFY is a bit Teradata-specific, but slightly tidier than the alternative...
SELECT *
FROM (
    SELECT t.*,
           SUM(t.indicator) OVER (PARTITION BY t.somecolumn /* optional */
                                  ORDER BY t.Date
                                  ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING)
               AS previous_indicator
    FROM yourTable t
) lagged
WHERE lagged.indicator < lagged.previous_indicator
Supposing you mean that you want to determine whether any row having 1 as its indicator value has an earlier Date than a row in its group having 0 as its indicator value, you can identify groups with that characteristic by including the appropriate extreme dates in your aggregate results:
SELECT
    ...
    MAX(CASE indicator WHEN 0 THEN Date END) AS last_ind_0,
    MIN(CASE indicator WHEN 1 THEN Date END) AS first_ind_1,
    ...
You then test whether first_ind_1 is less than last_ind_0, either in code or as another selection item.
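As a sketch of that test folded into the same aggregate query as a selection item (grouping and other columns elided, as above):

SELECT
    ...
    CASE WHEN MIN(CASE indicator WHEN 1 THEN Date END)
            < MAX(CASE indicator WHEN 0 THEN Date END)
         THEN 1 ELSE 0 END AS went_from_1_to_0,
    ...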