Subtraction of values depending on time in SQL

For each EVENT_TYPE that appears more than once,
I need a SQL statement that returns the event_type and the difference between the latest value registered for that event_type and the second latest. I appreciate your help.

You can use LEAD() (or LAG() if you prefer) to pair each row with the next one in time-descending order, i.e. the chronologically previous record, and calculate the difference only when such a record exists and only for the latest Time per Event_Type:
with cte as (
    select *,
           row_number() over (partition by event_type order by time desc) as rn,
           lead(value) over (partition by event_type order by time desc) as prev
    from YourTable
)
select event_type, value - prev as value
from cte
where prev is not null
  and rn = 1
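To make this concrete, here is a minimal, self-contained sketch (PostgreSQL syntax; the rows are made up, not from the question):
with yourtable (event_type, value, time) as (
    values (1, 7,  timestamp '2021-01-01 10:00:00'),
           (1, 10, timestamp '2021-01-01 11:00:00'),
           (2, 5,  timestamp '2021-01-01 09:00:00')  -- only one row, so excluded
),
cte as (
    select *,
           row_number() over (partition by event_type order by time desc) as rn,
           lead(value) over (partition by event_type order by time desc) as prev
    from yourtable
)
select event_type, value - prev as value
from cte
where prev is not null
  and rn = 1;
-- returns one row: event_type = 1, value = 10 - 7 = 3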

I would use row_number() and conditional aggregation:
select e.event_type,
       sum(case when seqnum = 1 then value
                when seqnum = 2 then -value
           end) as diff
from (select e.*,
             row_number() over (partition by e.event_type order by e.time desc) as seqnum
      from events e
     ) e
group by e.event_type
having count(*) >= 2;

Related

How to get an incrementing number when there is a change in a column in BigQuery?

I have date, id, and flag columns in this table. How can I get a value column that is an incrementing number and resets to 1 whenever the flag column changes?
Consider the approach below:
select * except(changed, grp),
       row_number() over(partition by id, grp order by date) value
from (
    select *, countif(changed) over(partition by id order by date) grp
    from (
        select *,
               ifnull(flag != lag(flag) over(partition by id order by date), true) changed
        from `project.dataset.table`
    )
)
If applied to the sample data in your question, this produces the expected output.
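As a quick, self-contained illustration (the id/flag rows below are made up, not the asker's data):
with sample_data as (
  select 1 id, date '2021-01-01' date, true flag union all
  select 1, date '2021-01-02', true union all
  select 1, date '2021-01-03', false union all
  select 1, date '2021-01-04', false union all
  select 1, date '2021-01-05', true
)
select * except(changed, grp),
       row_number() over(partition by id, grp order by date) value
from (
    select *, countif(changed) over(partition by id order by date) grp
    from (
        select *,
               ifnull(flag != lag(flag) over(partition by id order by date), true) changed
        from sample_data
    )
)
order by id, date
-- value restarts at 1 on each flag change: 1, 2, 1, 2, 1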
You seem to want to count the number of falses since the last true. You can use:
select t.* except (grp),
       (case when flag
             then 1
             else row_number() over (partition by id, grp order by date) - 1
        end)
from (select t.*,
             countif(flag) over (partition by id order by date) as grp
      from t
     ) t;
If you know that the dates have no gaps, you can actually do this without a subquery:
select t.*,
       (case when flag then 1
             else date_diff(date,
                            max(case when flag then date end) over (partition by id order by date),
                            day)
        end)
from t;
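To see why this works: max(case when flag then date end) as a running window value is the most recent flagged date so far, so date_diff counts the days elapsed since that date. A sketch with made-up rows (BigQuery syntax, gap-free dates as the answer requires):
with t as (
  select 1 id, date '2021-01-01' date, true flag union all
  select 1, date '2021-01-02', false union all
  select 1, date '2021-01-03', false union all
  select 1, date '2021-01-04', true
)
select t.*,
       (case when flag then 1
             else date_diff(date,
                            max(case when flag then date end) over (partition by id order by date),
                            day)
        end) as value
from t;
-- yields 1, 1, 2, 1 for the four rows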

MariaDB get first and last record of the month - nested query

I am using MariaDB and I have this kind of data:
I also have data for March, and I am using this query to select the distinct months from the database:
select distinct DATE_FORMAT(DT, '%m-%Y') as singleMonth from myTable
I want to be able to select the FIRST and LAST record of the P2 column for every month. How can I combine this with the query above, so that I get all the distinct months and also the first and last record for each month?
An example of what the query should return:
You can use window functions and conditional aggregation:
select year(dt), month(dt),
       min(case when seqnum_asc = 1 then p2 end) as first_p2,
       min(case when seqnum_desc = 1 then p2 end) as last_p2
from (select t.*,
             row_number() over (partition by year(dt), month(dt) order by dt asc) as seqnum_asc,
             row_number() over (partition by year(dt), month(dt) order by dt desc) as seqnum_desc
      from t
     ) t
group by year(dt), month(dt);
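For reference, a sketch against made-up rows (MariaDB 10.2+ syntax; the dates and P2 values are invented):
with t (dt, p2) as (
    select '2021-02-01 08:00:00', 10 union all
    select '2021-02-15 12:00:00', 25 union all
    select '2021-02-28 18:00:00', 40 union all
    select '2021-03-03 09:00:00', 7 union all
    select '2021-03-20 09:00:00', 13
)
select year(dt), month(dt),
       min(case when seqnum_asc = 1 then p2 end) as first_p2,
       min(case when seqnum_desc = 1 then p2 end) as last_p2
from (select t.*,
             row_number() over (partition by year(dt), month(dt) order by dt asc) as seqnum_asc,
             row_number() over (partition by year(dt), month(dt) order by dt desc) as seqnum_desc
      from t
     ) t
group by year(dt), month(dt);
-- expected: (2021, 2, 10, 40) and (2021, 3, 7, 13)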

Get the date on which the min/max value occurred when querying with BETWEEN

Is there a way to get the day on which the min and max values occurred when querying between certain dates?
Let's say I'm querying between the 1st of the month and the 31st, and I want to see on which day the min and max values for a given column occurred.
You can use window functions and aggregation:
select max(case when seqnum_asc = 1 then datecol end) as min_val_date,
       max(case when seqnum_desc = 1 then datecol end) as max_val_date
from (select t.*,
             row_number() over (order by col asc) as seqnum_asc,
             row_number() over (order by col desc) as seqnum_desc
      from t
      where datecol >= ? and datecol < ?
     ) t;
Or you can just use aggregation:
select (array_agg(datecol order by col desc limit 1))[ordinal(1)] as max_val_date,
       (array_agg(datecol order by col asc limit 1))[ordinal(1)] as min_val_date
from t;
Consider the approach below (if you care about performance):
select date_of_min, date_of_max
from (
    select date_col as date_of_min from `project.dataset.table`
    where date_col between min_date and max_date
    order by value_col limit 1
), (
    select date_col as date_of_max from `project.dataset.table`
    where date_col between min_date and max_date
    order by value_col desc limit 1
)

Difference between last and second last event in a table of events

I have the following table, which was created by:
create table events (
    event_type integer not null,
    value integer not null,
    time timestamp not null,
    unique (event_type, time)
);
Given the data in the pic, I want to write a query that, for each event_type that has been registered more than once, returns the difference between the latest and the second latest value.
Given the above data, the output should look like:
event_type  value
2           -5
3           4
I solved it using the following:
CREATE VIEW [max_date] AS
SELECT event_type, max(time) as time, value
FROM events
GROUP BY event_type
HAVING count(event_type) > 1
ORDER BY time desc;

select event_type, value
from (
    select event_type, value, max(time)
    from (
        select E1.event_type, ([max_date].value - E1.value) as value, E1.time
        from events E1, [max_date]
        where [max_date].event_type = E1.event_type
          and [max_date].time > E1.time
    )
    group by event_type
)
But this seems like a very complicated query, and I wonder if there is an easier way?
Use window functions:
select e.*,
       (value - prev_value)
from (select e.*,
             lag(value) over (partition by event_type order by time) as prev_value,
             row_number() over (partition by event_type order by time desc) as seqnum
      from events e
     ) e
where seqnum = 1 and prev_value is not null;
You could use lead() and row_number():
select event_type, val
from (
    select event_type,
           value - lead(value) over(partition by event_type order by time desc) as val,
           row_number() over(partition by event_type order by time desc) as rn
    from events
) t
where rn = 1 and val is not null
The inner query ranks the records of each event_type by descending time and computes the difference between each value and the chronologically previous one.
Then, the outer query just keeps the top record per group, where that difference exists.
Here is a way to do this using a combination of analytic functions and aggregation. This approach is handy if your database does not support LEAD and LAG.
WITH cte AS (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY event_type ORDER BY time DESC) AS rn
    FROM events
)
SELECT
    event_type,
    MAX(CASE WHEN rn = 1 THEN value END) - MAX(CASE WHEN rn = 2 THEN value END) AS value
FROM cte
GROUP BY
    event_type
HAVING
    COUNT(*) > 1;
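If you want to sanity-check any of these, here is a set of hypothetical rows consistent with the expected output above (the original data is only shown in a picture, so these values are invented):
insert into events (event_type, value, time) values
    (1, 9, '2021-01-01 10:00:00'),   -- single row: filtered out
    (2, 7, '2021-01-01 10:00:00'),
    (2, 2, '2021-01-01 11:00:00'),   -- latest for type 2: 2 - 7 = -5
    (3, 1, '2021-01-01 10:00:00'),
    (3, 5, '2021-01-01 11:00:00');   -- latest for type 3: 5 - 1 = 4
Against these rows, each of the queries above returns (2, -5) and (3, 4).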

count consecutive record with timestamp interval requirement

Referring to this post: link, I used the answer provided by @Gordon Linoff:
select taxi, count(*)
from (select t.taxi, t.client, count(*) as num_times
from (select t.*,
row_number() over (partition by taxi order by time) as seqnum,
row_number() over (partition by taxi, client order by time) as seqnum_c
from t
) t
group by t.taxi, t.client, (seqnum - seqnum_c)
having count(*) >= 2
)
group by taxi;
and got my answer perfectly, like this:
Tom 3 (AA counts as 1, AAA counts as 1, and BB counts as 1, so 3 in total)
Bob 1
But now I would like to add one more condition: the time between two consecutive clients for the same taxi should not be longer than 2 hours.
I know that I should probably use row_number() again and calculate the time difference with datediff, but I have no idea where to add it or how to do it.
Any suggestions?
This requires a bit more logic. In this case, I would use lag() to calculate the groups:
select taxi, count(*)
from (select t.taxi, t.client, count(*) as num_times
      from (select t.*,
                   sum(case when prev_client = client and
                                 prev_time > time - interval '2 hour'
                            then 0 else 1
                       end) over (partition by taxi order by time) as grp
            from (select t.*,
                         lag(client) over (partition by taxi order by time) as prev_client,
                         lag(time) over (partition by taxi order by time) as prev_time
                  from t
                 ) t
           ) t
      group by t.taxi, t.client, grp
      having count(*) >= 2
     )
group by taxi;
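To see how grp behaves, here is a worked trace on made-up rows for one taxi (the times are hypothetical); grp increments whenever the streak breaks, so rows in the same uninterrupted streak share a group:
client | time  | prev_client | prev_time | grp
A      | 09:00 | null        | null      | 1    -- first ride starts a streak
A      | 09:30 | A           | 09:00     | 1    -- same client, within 2 hours
A      | 13:00 | A           | 09:30     | 2    -- same client, but gap > 2 hours
B      | 13:30 | A           | 13:00     | 3    -- client changed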
Note: You don't specify the database, so this uses ISO/ANSI standard syntax for date/time comparisons. You can adjust this for your actual database.