How to identify rows per group before a certain value gap? - sql

I'd like to update a certain column in a table based on the difference in another column's value between neighboring rows in PostgreSQL.
Here is a test setup:
CREATE TABLE test(
main INTEGER,
sub_id INTEGER,
value_t INTEGER);
INSERT INTO test (main, sub_id, value_t)
VALUES
(1,1,8),
(1,2,7),
(1,3,3),
(1,4,85),
(1,5,40),
(2,1,3),
(2,2,1),
(2,3,1),
(2,4,8),
(2,5,41);
My goal is to determine, in each group main and starting from sub_id 1, where the difference diff between neighboring rows first exceeds a certain threshold (e.g. diff > 10 or diff < -10), checking in ascending order of sub_id. Until the threshold is reached, I would like to flag every passed row AND the one row where the condition is FALSE by filling column newval with a value, e.g. 1.
Should I use a loop or are there smarter solutions?
The task description in pseudocode:
FOR i IN GROUP [PARTITION BY main ORDER BY sub_id]:
   DO UNTIL diff > 10 OR diff < -10
      SET newval = 1 AND LEAD(newval) = 1

Basic SELECT
As fast as possible:
SELECT *, bool_and(diff BETWEEN -10 AND 10) OVER (PARTITION BY main ORDER BY sub_id) AS flag
FROM  (
   SELECT *, value_t - lag(value_t, 1, value_t) OVER (PARTITION BY main ORDER BY sub_id) AS diff
   FROM   test
   ) sub;
Fine points
Your thought model revolves around the window function lead(). But its counterpart lag() is a bit more efficient for the purpose, since there is no off-by-one error when including the row before the big gap. Alternatively, use lead() with inverted sort order (ORDER BY sub_id DESC).
To avoid NULL for the first row in the partition, provide value_t as the default in the 3rd parameter, which makes diff 0 instead of NULL. Both lead() and lag() have that capability.
diff BETWEEN -10 AND 10 is slightly faster than @ diff < 11 (clearer and more flexible, too). (@ being Postgres' "absolute value" operator, equivalent to the abs() function.)
bool_or() or bool_and() in the outer window function is probably cheapest to mark all rows up to the big gap.
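To illustrate the lag() default mentioned above, a minimal sketch against the test data from the question:

SELECT sub_id, value_t
     , lag(value_t)             OVER w AS lag_plain    -- NULL for the first row in the partition
     , lag(value_t, 1, value_t) OVER w AS lag_default  -- falls back to value_t itself, so diff becomes 0
FROM   test
WHERE  main = 1
WINDOW w AS (ORDER BY sub_id);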
Your UPDATE
Until the threshold is reached, I would like to flag every passed row AND the one row where the condition is FALSE by filling column newval with a value, e.g. 1.
Again, as fast as possible.
UPDATE test AS t
SET    newval = 1
FROM  (
   SELECT main, sub_id
        , bool_and(diff BETWEEN -10 AND 10) OVER (PARTITION BY main ORDER BY sub_id) AS flag
   FROM  (
      SELECT main, sub_id
           , value_t - lag(value_t, 1, value_t) OVER (PARTITION BY main ORDER BY sub_id) AS diff
      FROM   test
      ) sub
   ) u
WHERE (t.main, t.sub_id) = (u.main, u.sub_id)
AND    u.flag;
Fine points
Computing all values in a single query is typically substantially faster than a correlated subquery.
The added WHERE condition AND u.flag makes sure we only update rows that actually need an update.
If some of the rows may already have the right value in newval, add another clause to avoid those empty updates, too: AND t.newval IS DISTINCT FROM 1 (see the combined sketch below).
See:
How do I (or can I) SELECT DISTINCT on multiple columns?
SET newval = 1 assigns a constant (even though we could use the actually calculated value in this case), that's a bit cheaper.
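Putting both guards together, the complete statement might read (same derived table u as above):

UPDATE test AS t
SET    newval = 1
FROM  (
   SELECT main, sub_id
        , bool_and(diff BETWEEN -10 AND 10) OVER (PARTITION BY main ORDER BY sub_id) AS flag
   FROM  (
      SELECT main, sub_id
           , value_t - lag(value_t, 1, value_t) OVER (PARTITION BY main ORDER BY sub_id) AS diff
      FROM   test
      ) sub
   ) u
WHERE (t.main, t.sub_id) = (u.main, u.sub_id)
AND    u.flag                        -- only rows that need the flag
AND    t.newval IS DISTINCT FROM 1;  -- skip rows that already carry it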
db<>fiddle here

Your question was hard to comprehend: the "value_t" column was irrelevant to the question, and you forgot to define the "diff" column in your SQL.
Anyhow, here's your solution:
WITH data AS (
   SELECT main, sub_id, value_t
        , abs(value_t - lead(value_t) OVER (PARTITION BY main ORDER BY sub_id)) > 10 AS is_evil
   FROM   test
)
SELECT main, sub_id, value_t
     , CASE max(is_evil::int) OVER (PARTITION BY main ORDER BY sub_id
                                    ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING)
       WHEN 1 THEN NULL ELSE 1 END AS newval
FROM   data;
I'm using a CTE to prepare the data (computing whether a row is "evil"), and then the max() window function checks whether there were any "evil" rows before the current one, per partition.

EXISTS on an aggregating subquery:
UPDATE test u
SET    value_t = NULL
WHERE  EXISTS (
   SELECT *
   FROM  (
      SELECT main, sub_id, value_t
           , ABS(value_t - lag(value_t) OVER (PARTITION BY main ORDER BY sub_id)) AS absdiff
      FROM   test
      ) x
   WHERE  x.main = u.main
   AND    x.sub_id <= u.sub_id
   AND    x.absdiff >= 10
);
SELECT * FROM test
ORDER BY main, sub_id;
Result:
UPDATE 3
 main | sub_id | value_t
------+--------+---------
    1 |      1 |       8
    1 |      2 |       7
    1 |      3 |       3
    1 |      4 |
    1 |      5 |
    2 |      1 |       3
    2 |      2 |       1
    2 |      3 |       1
    2 |      4 |       8
    2 |      5 |
(10 rows)

Related

Snowflake: Repeating rows based on column value

How to repeat rows based on a column value in Snowflake using SQL?
I tried a few methods, such as DUAL and CONNECT BY, but they are not working.
I have two columns: Id and Quantity.
For each ID, there are different values of Quantity.
So if you have a count, you can use a generator:
with ten_rows as (
select row_number() over (order by null) as rn
from table(generator(ROWCOUNT=>10))
), data(id, count) as (
select * from values
(1,2),
(2,4)
)
SELECT
d.*
,r.rn
from data as d
join ten_rows as r
on d.count >= r.rn
order by 1,3;
ID | COUNT | RN
---+-------+---
 1 |     2 |  1
 1 |     2 |  2
 2 |     4 |  1
 2 |     4 |  2
 2 |     4 |  3
 2 |     4 |  4
OK, let's start by generating some data: 10 rows, each with a QTY randomly chosen as 1 or 2.
Next we want to duplicate the rows with a QTY of 2 and leave the QTY = 1 rows as they are.
Obviously you can change all parameters above to suit your needs. This solution works super fast and, in my opinion, way better than table generation.
Simply stack SPLIT_TO_TABLE() and REPEAT() with a LATERAL() join and voila: REPEAT() emits QTY - 1 copies of the delimiter string, and SPLIT_TO_TABLE() on that same delimiter then returns QTY rows per input row.
WITH TEN_ROWS AS (
   SELECT ROW_NUMBER() OVER (ORDER BY NULL) AS SOME_ID
        , UNIFORM(1, 2, RANDOM())           AS QTY
   FROM TABLE(GENERATOR(ROWCOUNT => 10))
)
SELECT TEN_ROWS.*
FROM TEN_ROWS
   , LATERAL SPLIT_TO_TABLE(REPEAT('hire me $10/hour', QTY - 1), 'hire me $10/hour') ALTERNATIVE_APPROACH;

Moving average within groups that returns NULL if any row is NULL (Snowflake - SQL)

I need to calculate the moving average of a column per group (partitioned by id). The only twist is that I need the result to be NULL if any value in the corresponding window is NULL.
Example of expected behaviour (for a given id and window size=3):
A    | mov_ave_A
-----+----------
NULL | NULL
1    | NULL
1    | NULL
1    | 1
4    | 2
The first 3 rows of the moving average are NULL, because the first value (which is included in the first 3 windows) is NULL. Row 4 of mov_ave_A is equal to 1 because it's the average of rows 2 to 4 of A, and so on.
I tried:
CASE WHEN SUM(CASE WHEN a IS NULL THEN 1 ELSE 0 END) = 0 THEN AVG(a) ELSE NULL END
OVER (
PARTITION BY id
ORDER BY date_month
ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS mov_ave_A
but I get
"Sliding window frame unsupported for function CASE".
Also, I'd really like the solution to be short and simple as I need to create 6 such columns. So, I'll have to repeat the logic 6 times.
The issue with your query is that the OVER clause is after the END. I believe this should work: you need an OVER clause for each window function, so once for COUNT and once for AVG. COUNT is an easier way to check for NULLs than using SUM.
SELECT
*
,CASE
/*Check for 3 values in a, if so return the rolling AVG value. Implicit ELSE NULL*/
WHEN COUNT(a) OVER (PARTITION BY ID ORDER BY date_month ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) = 3
THEN AVG(a) OVER (PARTITION BY ID ORDER BY date_month ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
END AS mov_ave_A
FROM YourTable
Use the following case expression:
CASE WHEN COUNT(a) OVER (
PARTITION BY id
ORDER BY date_month
ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
) = 3 THEN AVG(a) OVER (
PARTITION BY id
ORDER BY date_month
ROWS BETWEEN 2 PRECEDING AND CURRENT ROW
) END AS mov_avg
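Regarding the six columns: the pattern repeats mechanically per column. A sketch for two of them, assuming a hypothetical second column b:

SELECT
  t.*
, CASE WHEN COUNT(a) OVER (PARTITION BY id ORDER BY date_month ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) = 3
       THEN AVG(a)  OVER (PARTITION BY id ORDER BY date_month ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
  END AS mov_ave_a
, CASE WHEN COUNT(b) OVER (PARTITION BY id ORDER BY date_month ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) = 3
       THEN AVG(b)  OVER (PARTITION BY id ORDER BY date_month ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
  END AS mov_ave_b
FROM YourTable t;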

Insert in table such that it does not create consecutive duplicates H2

I need to create an insert query for an H2 database. I need to insert into a table such that the insert does not create consecutive duplicates; a row that would create one should be ignored. For example,
First name | Last name | Date
A | Z | 2018-12-02
B | Y | 2018-12-03
A | X | 2018-12-04
If I were to insert the row A | W | 2018-12-01 into the above table (kept sorted in ascending order by date), the insert has to check for consecutive duplicates in the First name column. Since the new row would create a consecutive duplicate, as shown below, it should be ignored:
First name | Last name | Date
A | W | 2018-12-01
A | Z | 2018-12-02
B | Y | 2018-12-03
A | X | 2018-12-04
In H2 you can use the following SQL:
INSERT INTO tableName SELECT * FROM VALUES ('A', 'W', DATE '2018-12-01') T(F, L, D)
WHERE NOT EXISTS (
SELECT * FROM tableName
QUALIFY "First name" = F
AND DENSE_RANK() OVER (ORDER BY "Date")
- DENSE_RANK(D) WITHIN GROUP (ORDER BY "Date") OVER() IN (-1, 0)
);
Here the window version of the hypothetical-set DENSE_RANK function (don't confuse it with the window DENSE_RANK function; that's a different one) is used to determine the insert position of the new row:
DENSE_RANK(D) WITHIN GROUP (ORDER BY "Date") OVER()
This aggregate function is part of the SQL Standard, but in the SQL Standard it may not be used as a window function; H2 is less restrictive.
Then the plain DENSE_RANK window function is used to number the existing rows in the table. The difference between the number of an existing row and the number of the hypothetical row is -1 for the row directly before the inserted row and 0 for the row directly after it. For example, with existing dates d1 < D < d2, the hypothetical rank of D is 2, so d1 (rank 1) yields -1 and d2 (rank 2) yields 0.
We need to check only rows with the same "First name" value. So the whole filter criteria will be
"First name" = F
AND DENSE_RANK() OVER (ORDER BY "Date")
- DENSE_RANK(D) WITHIN GROUP (ORDER BY "Date") OVER() IN (-1, 0)
In the SQL Standard you can't filter results after evaluation of window functions, but H2 has the non-standard QUALIFY clause (from Teradata) for that purpose; in other DBMS a subquery is needed.
The final condition to decide whether row may be inserted is
WHERE NOT EXISTS (
SELECT * FROM tableName
QUALIFY "First name" = F
AND DENSE_RANK() OVER (ORDER BY "Date")
- DENSE_RANK(D) WITHIN GROUP (ORDER BY "Date") OVER() IN (-1, 0)
);
If there are no such rows, insertion of a new row will not create two sequential rows with the same "First name" value.
This condition can be used in the plain standard insert from select command.
This solution is not expected to be fast if the table has many rows. A more efficient solution can use subqueries that look up only the previous and next rows, such as NOT EXISTS (SELECT * FROM ((SELECT * FROM tableName WHERE "Date" > D ORDER BY "Date" FETCH FIRST ROW ONLY) UNION (SELECT * FROM tableName WHERE "Date" < D ORDER BY "Date" DESC FETCH FIRST ROW ONLY)) WHERE "First name" = F). However, H2 doesn't allow references to outer tables from deeply nested queries, so D here needs to be replaced with a JDBC parameter (… VALUES (?1, ?2, ?3) … WHERE "Date" < ?3 …). You can try to build such a command yourself; a rough sketch follows.
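A rough, untested sketch of that idea, with ?1, ?2, ?3 as JDBC parameters for first name, last name, and date:

INSERT INTO tableName
SELECT * FROM VALUES (?1, ?2, ?3) T(F, L, D)
WHERE NOT EXISTS (
   SELECT * FROM (
      (SELECT "First name" FROM tableName
       WHERE "Date" > ?3 ORDER BY "Date" FETCH FIRST ROW ONLY)
      UNION
      (SELECT "First name" FROM tableName
       WHERE "Date" < ?3 ORDER BY "Date" DESC FETCH FIRST ROW ONLY)
   ) neighbours
   WHERE "First name" = ?1
);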

SQL group table by "leading rows" without pl/sql

I have this table (short example) with two columns
1 a
2 a
3 a3
4 a
5 a
6 a6
7 a
8 a8
9 a
and I would like to group/partition them into groups separated by those leading "a", ideally to add another column like this, so I can address those groups easily.
1 a 0
2 a 0
3 a3 3
4 a 3
5 a 3
6 a6 6
7 a 6
8 a8 8
9 a 8
The problem is that the setup of the table is dynamic, so I can't use lag or lead functions statically. Any ideas how to do this without PL/pgSQL in Postgres 9.5?
Assuming the leading part is a single character. Hence the expression right(data, -1) works to extract the group name. Adapt to your actual prefix.
The solution uses two window functions, which can't be nested. So we need a subquery or a CTE.
SELECT id, data
, COALESCE(first_value(grp) OVER (PARTITION BY grp_nr ORDER BY id), '0') AS grp
FROM (
SELECT *, NULLIF(right(data, -1), '') AS grp
, count(NULLIF(right(data, -1), '')) OVER (ORDER BY id) AS grp_nr
FROM tbl
) sub;
Produces your desired result exactly.
NULLIF(right(data, -1), '') to get the effective group name or NULL if none.
count() only counts non-null values, so we get a higher count for every new group in the subquery.
In the outer query, we take the first grp value per grp_nr as group name and default to '0' with COALESCE for the first group without name (which has a NULL as group name so far).
We could use min() or max() as outer window function as well, since there is only one non-null value per partition anyway. first_value() is probably cheapest since the rows are sorted already.
Note the group name grp is data type text. You may want to cast to integer if those are clean (and reliable) integer numbers.
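For example, the outer expression could become (a minimal variant, assuming the names are clean digits):

COALESCE(first_value(grp) OVER (PARTITION BY grp_nr ORDER BY id), '0')::int AS grp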
This can be achieved by assigning rows containing a one value and all other rows a different value, then using a cumulative sum to get the desired number for the rows. The group number moves to the next number whenever a new value in the val column is encountered, and all following rows with a plain a keep the same group number as the row before, and so on.
I assume that you need a distinct number for each group and that the actual number doesn't matter.
select id, val, sum(ex) over (order by id) as cm_sum
from (select t.*
           , case when val = 'a' then 0 else 1 end as ex
      from t) x
The result for the query above with the data in question, would be
id val cm_sum
--------------
1 a 0
2 a 0
3 a3 1
4 a 1
5 a 1
6 a6 2
7 a 2
8 a8 3
9 a 3
With the given data, you can use a cumulative max:
select . . .,
       coalesce(max(substr(col2, 2)) over (order by col1), '0')
If you don't strictly want the maximum, then it gets a bit more difficult. The ANSI solution is to use the IGNORE NULLs option on LAG(). However, Postgres does not (yet) support that. An alternative is:
select . . ., coalesce(substr(reft.col2, 2), '0')
from (select . . .,
max(case when col2 like 'a_%' then col1 end) over (order by col1) as ref_col1
from t
) tt join
t reft
on tt.ref_col1 = reft.col1
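For reference, in engines that do support IGNORE NULLS (e.g. Oracle or Snowflake), a close equivalent might use last_value() so the current row's own value is picked up, with nullif() turning the empty suffix into NULL:

coalesce(last_value(nullif(substr(col2, 2), '')) ignore nulls over (order by col1), '0')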
You can also try this:
with mytable as (
   select split_part(t, ' ', 1)::integer as id
        , split_part(t, ' ', 2)          as myvalue
   from (select unnest(string_to_array($$1 a;2 a;3 a3;4 a;5 a;6 a6;7 a;8 a8;9 a$$, ';')) as t) a
)
select id, myvalue, myresult
from mytable
join (
   select coalesce(nullif(substr(myvalue, 2), ''), '0') as myresult
        , idmin as id_down
        , coalesce(lead(idmin) over (order by myvalue), 999999999999) as id_up
   from (
      select myvalue, min(id) as idmin from mytable group by 1
   ) a
) b on id between id_down and id_up - 1

How to efficiently get a value from the last row in bulk on SQL Server

I have a table like so
Id | Type | Value
--------------------
0 | Big | 2
1 | Big | 3
2 | Small | 3
3 | Small | 3
I would like to get a table like this
Type | Last Value
--------------------
Small | 3
Big | 3
How can I do this? I understand there is an SQL Server function called LAST_VALUE(...) OVER (...), but I can't get it to work with GROUP BY.
I've also tried using SELECT MAX(ID) and SELECT TOP 1, but this seems a bit inefficient since there would be a subquery for each value. The queries take too long when the table has a few million rows in it.
Is there a way to quickly get the last value for these, perhaps using LAST_VALUE?
You can do it using row_number():
select
    type,
    value
from
(
    select
        type,
        value,
        row_number() over (partition by type order by id desc) as RN
    from likeso  -- the FROM clause was missing; table name assumed, as in the answer below
) TMP
where RN = 1
Can't test this now since SQL Fiddle doesn't seem to work, but hopefully that's ok.
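For completeness, LAST_VALUE() itself can also be made to work; the catch is that its default frame ends at the current row, so the full frame must be spelled out (a sketch, again assuming the table is named likeso):

select distinct
       type,
       last_value(value) over (partition by type order by id
                               rows between unbounded preceding
                                        and unbounded following) as last_value
from likeso;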
The most efficient method might be not exists, which uses an anti-join for the underlying operator:
select type, value
from likeso l
where not exists (select 1 from likeso l2 where l2.type = l.type and l2.id > l.id)
For performance, you want an index on likeso(type, id).
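For instance (index name arbitrary):

create index likeso_type_id on likeso (type, id);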
I really wonder if there is a more efficient solution, but I use the following query for such needs:
Select Id, Type, Value
From ( Select *, Max (Id) Over (Partition By Type) As LastId
From #Table) T
Where Id = LastId