Sum up following rows with same status - sql

I am trying to sum up following rows if they have the same id and status.
The DB is running on a Windows Server 2016 and is a Microsoft SQL Server 14.
I was thinking about using a self join, but that would only sum up 2 rows, or somehow use lead/lag.
Here is what the table looks like (Duration is the number of days between this row and the next row with the same id, sorted by mod_Date):
+-----+--------------+-------------------------+----------+
| ID | Status | mod_Date | Duration |
+-----+--------------+-------------------------+----------+
| 1 | In Inventory | 2015-04-10 09:11:37.000 | 12 |
| 1 | Deployed | 2015-04-22 10:13:35.000 | 354 |
| 1 | Deployed | 2016-04-10 09:11:37.000 | 30 |
| 1 | In Inventory | 2016-05-10 09:11:37.000 | Null |
| 2 | In Inventory | 2013-04-10 09:11:37.000 | 12 |
| ... | ... | ... | ... |
+-----+--------------+-------------------------+----------+
There can be several rows with the same status and id following each other, not only two.
And what I want to get is:
+-----+--------------+-------------------------+----------+
| ID | Status | mod_Date | Duration |
+-----+--------------+-------------------------+----------+
| 1 | In Inventory | 2015-04-10 09:11:37.000 | 12 |
| 1 | Deployed | 2015-04-22 10:13:35.000 | 384 |
| 1 | In Inventory | 2016-05-10 09:11:37.000 | Null |
| 2 | In Inventory | 2013-04-10 09:11:37.000 | 12 |
| ... | ... | ... | ... |
+-----+--------------+-------------------------+----------+

This is an example of gaps-and-islands. In this case, I think the difference of row numbers suffices:
select id, status, min(mod_date) as mod_date, sum(duration) as duration
from (select t.*,
             row_number() over (partition by id, status order by mod_date) as seqnum_is,
             row_number() over (partition by id order by mod_date) as seqnum_i
      from t
     ) t
group by id, status, seqnum_i - seqnum_is;
Note the min(mod_date): each island should keep its first modification date, as shown in the desired output.
The trick here is that the difference of two increasing sequences identifies "islands" where the values are the same. This is rather mysterious the first time you see it, but if you run the subquery, you'll quickly see how it works.
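To make the trick concrete, here is a minimal, self-contained sketch that runs the same query against SQLite (used here as a stand-in for SQL Server; the table name `t` and the sample rows are taken from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (id int, status text, mod_date text, duration int)")
con.executemany("insert into t values (?, ?, ?, ?)", [
    (1, "In Inventory", "2015-04-10 09:11:37", 12),
    (1, "Deployed",     "2015-04-22 10:13:35", 354),
    (1, "Deployed",     "2016-04-10 09:11:37", 30),
    (1, "In Inventory", "2016-05-10 09:11:37", None),
    (2, "In Inventory", "2013-04-10 09:11:37", 12),
])

# The difference seqnum_i - seqnum_is is constant within each run of
# identical (id, status) values, so grouping by it merges each island.
rows = con.execute("""
    select id, status, min(mod_date) as mod_date, sum(duration) as duration
    from (select t.*,
                 row_number() over (partition by id, status order by mod_date) as seqnum_is,
                 row_number() over (partition by id order by mod_date) as seqnum_i
          from t
         ) t
    group by id, status, seqnum_i - seqnum_is
    order by id, mod_date
""").fetchall()

for r in rows:
    print(r)
```

The two Deployed rows for ID 1 collapse into one row with duration 354 + 30 = 384.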


Get row for each unique user based on highest column value

I have the following data
+--------+-----------+--------+
| UserId | Timestamp | Rating |
+--------+-----------+--------+
| 1 | 1 | 1202 |
| 2 | 1 | 1198 |
| 1 | 2 | 1204 |
| 2 | 2 | 1196 |
| 1 | 3 | 1206 |
| 2 | 3 | 1194 |
| 1 | 4 | 1198 |
| 2 | 4 | 1202 |
+--------+-----------+--------+
I am trying to find the distribution of each user's Rating, based on their latest row in the table (latest is determined by Timestamp). On the path to that, I am trying to get a list of user IDs and Ratings which would look like the following
+--------+--------+
| UserId | Rating |
+--------+--------+
| 1 | 1198 |
| 2 | 1202 |
+--------+--------+
Trying to get here, I sorted the list on UserId and Timestamp (desc) which gives the following.
+--------+-----------+--------+
| UserId | Timestamp | Rating |
+--------+-----------+--------+
| 1 | 4 | 1198 |
| 2 | 4 | 1202 |
| 1 | 3 | 1206 |
| 2 | 3 | 1194 |
| 1 | 2 | 1204 |
| 2 | 2 | 1196 |
| 1 | 1 | 1202 |
| 2 | 1 | 1198 |
+--------+-----------+--------+
So now I just need to take the top N rows, where N is the number of players. But I can't use a LIMIT clause, because LIMIT requires a constant expression: I want to use count(id) as its argument, which doesn't work.
Any suggestions on how I can get the data I need?
Cheers!
Andy
This should work:
SELECT test.UserId, Rating
FROM test
JOIN (SELECT UserId, MAX(Timestamp) AS Timestamp
      FROM test
      GROUP BY UserId
     ) m
  ON test.UserId = m.UserId AND test.Timestamp = m.Timestamp
If you can use window functions, then you can use the following:
SELECT UserId, Rating
FROM (SELECT UserId, Rating,
             ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Timestamp DESC) AS row_num
      FROM test
     ) m
WHERE row_num = 1
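A runnable sketch of the window-function variant, using SQLite and the sample rows from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table test (UserId int, Timestamp int, Rating int)")
con.executemany("insert into test values (?, ?, ?)",
                [(1, 1, 1202), (2, 1, 1198), (1, 2, 1204), (2, 2, 1196),
                 (1, 3, 1206), (2, 3, 1194), (1, 4, 1198), (2, 4, 1202)])

# Number each user's rows from newest to oldest; row_num = 1 is the latest.
rows = con.execute("""
    select UserId, Rating
    from (select UserId, Rating,
                 row_number() over (partition by UserId
                                    order by Timestamp desc) as row_num
          from test
         ) m
    where row_num = 1
    order by UserId
""").fetchall()

for r in rows:
    print(r)
```

This returns exactly one row per user, regardless of how many users there are, which sidesteps the LIMIT problem entirely.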

Calculate moving average of 90 days in bigquery

I am new to BigQuery and am trying to calculate a moving average over 90 days for some sample data.
The sample data looks like this:
+-------------+------------+-----------------+--------------+-----------------------+-------------+
| incident_id | inc start  | inc description | element_name | uid                   | is_repeated |
+-------------+------------+-----------------+--------------+-----------------------+-------------+
| 1           | 1/5/2022   | server down     | vm-001       | vm-001_server_down    | No          |
| 2           | 1/5/2022   | server down     | vm-001       | vm-001_server_down    | No          |
| 3           | 2/5/2022   | firewall issue  | vm-002       | vm-002_firewall_issue | No          |
| 4           | 3/5/2022   | firewall issue  | vm-003       | vm-003_firewall_issue | No          |
| 5           | 1/6/2022   | server down     | vm-001       | vm-001_server_down    | Yes         |
| 6           | 1/6/2022   | server down     | vm-001       | vm-001_server_down    | Yes         |
| 7           | 2/6/2022   | server unreach  | vm-003       | vm-003_server_unreach | No          |
| 8           | 19/11/2022 | server down     | vm-001       | vm-001_server_down    | No          |
+-------------+------------+-----------------+--------------+-----------------------+-------------+
If the same inc description and uid occur more than twice within 90 days, the is_repeated column should show "Yes".
What is the fastest way to achieve this using SQL?
Consider the query below. It first aggregates the incident days over a 90-day window for each uid, then removes duplicate days and counts the unique incident days.
WITH incidents AS (
  SELECT *,
         ARRAY_AGG(start) OVER (
           PARTITION BY uid
           ORDER BY UNIX_DATE(PARSE_DATE('%e/%m/%Y', start))
           RANGE BETWEEN 89 PRECEDING AND CURRENT ROW
         ) AS inc_days
  FROM sample_table
)
SELECT * EXCEPT(inc_days),
       IF((SELECT COUNT(DISTINCT day) FROM UNNEST(inc_days) day) > 1, 'YES', 'NO') AS is_repeated
FROM incidents
ORDER BY id;
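The same idea can be checked locally. The sketch below is a SQLite approximation (BigQuery's ARRAY_AGG/UNNEST are not available there), assuming ISO dates and a reduced three-column schema; it deduplicates days per uid first, counts incident days in a trailing 90-day RANGE window, and joins the flag back to the incidents. Like the answer, it flags "Yes" when more than one distinct incident day falls in the window, which reproduces the sample is_repeated column:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table sample_table (incident_id int, inc_start text, uid text)")
con.executemany("insert into sample_table values (?, ?, ?)", [
    (1, "2022-05-01", "vm-001_server_down"),
    (2, "2022-05-01", "vm-001_server_down"),
    (3, "2022-05-02", "vm-002_firewall_issue"),
    (4, "2022-05-03", "vm-003_firewall_issue"),
    (5, "2022-06-01", "vm-001_server_down"),
    (6, "2022-06-01", "vm-001_server_down"),
    (7, "2022-06-02", "vm-003_server_unreach"),
    (8, "2022-11-19", "vm-001_server_down"),
])

rows = con.execute("""
    with days as (
        -- one row per uid per distinct incident day
        select uid, cast(julianday(inc_start) as int) as day
        from sample_table
        group by uid, cast(julianday(inc_start) as int)
    ),
    flagged as (
        -- count distinct incident days in the trailing 90-day window
        select uid, day,
               count(*) over (partition by uid
                              order by day
                              range between 89 preceding and current row) as day_cnt
        from days
    )
    select s.incident_id, s.uid,
           case when f.day_cnt > 1 then 'Yes' else 'No' end as is_repeated
    from sample_table s
    join flagged f
      on f.uid = s.uid and f.day = cast(julianday(s.inc_start) as int)
    order by s.incident_id
""").fetchall()

for r in rows:
    print(r)
```

Incidents 5 and 6 come out "Yes" (vm-001_server_down recurred 31 days after 1/5), while incident 8 is "No" because 19/11 is more than 90 days after the previous occurrence.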

SQL - Calculate number of occurrences of previous day?

I want to calculate, on a daily basis, the number of people who also had an occurrence on the previous day, but I'm not sure how to do this.
Sample Table:
| ID | Date |
+----+-----------+
| 1 | 1/10/2020 |
| 1 | 1/11/2020 |
| 2 | 2/20/2020 |
| 3 | 2/20/2020 |
| 3 | 2/21/2020 |
| 4 | 2/23/2020 |
| 4 | 2/24/2020 |
| 5 | 2/22/2020 |
| 5 | 2/23/2020 |
| 5 | 2/24/2020 |
+----+-----------+
Desired Output:
| Date | Count |
+-----------+-------+
| 1/11/2020 | 1 |
| 2/21/2020 | 1 |
| 2/23/2020 | 1 |
| 2/24/2020 | 2 |
+-----------+-------+
Edit: Added desired output. The output count should be unique to the ID, not the number of date occurrences. i.e. an ID 5 can appear on this list 10 times for dates 2/23/2020 and 2/24/2020, but that would count as "1".
Use lag():
select date, count(*)
from (select t.*,
             lag(date) over (partition by id order by date) as prev_date
      from t
     ) t
where prev_date = dateadd(day, -1, date)
group by date;
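A runnable sketch of this lag() approach, using SQLite and ISO dates (both assumptions of this example; dateadd becomes SQLite's date(date, '-1 day')). Per the question's edit, it uses count(distinct id) so a duplicated (ID, Date) pair would only count once:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (id int, date text)")
con.executemany("insert into t values (?, ?)", [
    (1, "2020-01-10"), (1, "2020-01-11"), (2, "2020-02-20"),
    (3, "2020-02-20"), (3, "2020-02-21"), (4, "2020-02-23"),
    (4, "2020-02-24"), (5, "2020-02-22"), (5, "2020-02-23"),
    (5, "2020-02-24"),
])

# Keep only rows whose previous date (per id) is exactly one day earlier,
# then count the distinct ids per date.
rows = con.execute("""
    select date, count(distinct id) as cnt
    from (select t.*,
                 lag(date) over (partition by id order by date) as prev_date
          from t
         ) t
    where prev_date = date(date, '-1 day')
    group by date
    order by date
""").fetchall()

for r in rows:
    print(r)
```

This reproduces the desired output table: one person each on 1/11, 2/21, and 2/23, and two on 2/24.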

Aggregation by positive/negative values v.2

I've posted several topics and every query had some problems :( I've changed the table and examples for better understanding.
I have a table called PROD_COST with 5 fields
(ID, Duration, Cost, COST_NEXT, COST_CHANGE).
I need an extra field called "Groups" for aggregation.
Duration = number of days the price is valid (1 day = 1 row).
Cost = product price on that day.
Cost_next = lead(cost, 1, 0).
Cost_change = Cost_next - Cost.
example:
+----+----------+------+-------------+--------+
| ID | Duration | Cost | Cost_change | Groups |
+----+----------+------+-------------+--------+
| 1  | 1        | 10   | -1.5        | 1      |
| 2  | 1        | 8.5  | 3.7         | 2      |
| 3  | 1        | 12.2 | 0           | 2      |
| 4  | 1        | 12.2 | -2.2        | 3      |
| 5  | 1        | 10   | 0           | 3      |
| 6  | 1        | 10   | 3.2         | 4      |
| 7  | 1        | 13.2 | -2.7        | 5      |
| 8  | 1        | 10.5 | -1.5        | 5      |
| 9  | 1        | 9    | 0           | 5      |
| 10 | 1        | 9    | 0           | 5      |
| 11 | 1        | 9    | -1          | 5      |
| 12 | 1        | 8    | 1.5         | 6      |
+----+----------+------+-------------+--------+
Now I need to fill the "Groups" field by grouping on Cost_change, which can be positive, negative, or 0.
Some kind guy advised me this query:
select id, COST_CHANGE, sum(GRP) over (order by id asc) + 1
from (select *,
             case when sign(COST_CHANGE) != sign(isnull(lag(COST_CHANGE) over (order by id asc), COST_CHANGE))
                       and COST_CHANGE != 0
                  then 1 else 0 end as GRP
      from PROD_COST
     ) X
But there is a problem: if there are 0 values between two positive or two negative values, it puts them in separate groups, for example:
+-------------+--------+
| Cost_change | Groups |
+-------------+--------+
| 9.262 | 5777 |
| -9.262 | 5778 |
| 9.262 | 5779 |
| 0.000 | 5779 |
| 9.608 | 5780 |
| -11.231 | 5781 |
| 10.000 | 5782 |
+-------------+--------+
I need to have:
+-------------+--------+
| Cost_change | Groups |
+-------------+--------+
| 9.262 | 5777 |
| -9.262 | 5778 |
| 9.262 | 5779 |
| 0.000 | 5779 |
| 9.608 | 5779 | -- Here
| -11.231 | 5780 |
| 10.000 | 5781 |
+-------------+--------+
In other words, if there are 0 values between two positive or two negative values, they should all be in one group, because the sequence MINUS-0-0-MINUS contains no sign change. But if I had MINUS-0-0-PLUS, then Groups should be 1-1-1-2, because a positive value alternates with a negative one.
Thank you for attention!
I'm Using Sql Server 2012
I think the best approach is to remove the zeros, do the calculation, and then re-insert them. So:
with pcg as (
      select pc.*, min(id) over (partition by sgn, grp) as grpid
      from (select pc.*,
                   sign(cost_change) as sgn,
                   row_number() over (order by id) -
                   row_number() over (partition by sign(cost_change)
                                      order by id
                                     ) as grp
            from prod_cost pc
            where cost_change <> 0
           ) pc
     )
select pc.*, max(pcg.groups) over (order by pc.id)
from prod_cost pc left join
     (select pcg.*, dense_rank() over (order by grpid) as groups
      from pcg
     ) pcg
     on pc.id = pcg.id;
The CTE assigns a group identifier based on the lowest id in the group, where the groups are bounded by actual sign changes. The subquery turns this into a number. The outer query then accumulates the maximum value, to give a value to the 0 records.
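As a sanity check, here is a self-contained sketch of the same remove-zeros-then-backfill idea, run against SQLite with the seven Cost_change values from the example. A CASE expression stands in for sign() (not all SQLite builds have it), and each island is keyed by sign plus the row-number difference:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table prod_cost (id int, cost_change real)")
con.executemany("insert into prod_cost values (?, ?)",
                [(1, 9.262), (2, -9.262), (3, 9.262), (4, 0.0),
                 (5, 9.608), (6, -11.231), (7, 10.0)])

rows = con.execute("""
    with nz as (
        -- non-zero rows only; (sgn, grp) is constant within a run of one sign
        select id,
               case when cost_change > 0 then 1 else -1 end as sgn,
               row_number() over (order by id)
             - row_number() over (partition by case when cost_change > 0 then 1 else -1 end
                                  order by id) as grp
        from prod_cost
        where cost_change <> 0
    ),
    islands as (
        -- number the islands by their lowest id
        select id, dense_rank() over (order by grpid) as groups
        from (select nz.*, min(id) over (partition by sgn, grp) as grpid
              from nz) x
    )
    -- running max fills the zero rows with the previous group number
    select pc.id, pc.cost_change,
           max(i.groups) over (order by pc.id) as groups
    from prod_cost pc
    left join islands i on i.id = pc.id
    order by pc.id
""").fetchall()

for r in rows:
    print(r)
```

The zero at id 4 inherits group 3, and the following positive 9.608 stays in group 3 as well, matching the "I need to have" table above.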

How to partition by a customized sum value?

I have a table with the following columns: customer_id, event_date_time
I'd like to figure out how many times a customer triggers an event every 12 hours from the start of an event. In other words, aggregate the time between events for up to 12 hours by customer.
For example, if a customer triggers an event (in order) at noon, 1:30pm, 5pm, 2am, and 3pm, I would want to return the noon, 2am, and 3pm records.
I've written this query:
select cust_id, event_datetime,
       nvl(24 * (event_datetime - lag(event_datetime) over (partition by cust_id order by event_datetime)), 0) as difference
from tbl
I feel like I'm close with this. Is there a way to add something like
over (partition BY cust_id, sum(difference)<12 ORDER BY event_datetime)
EDIT: I'm adding some sample data:
+---------+-----------------+-------------+---+
| cust_id | event_datetime | DIFFERENCE | X |
+---------+-----------------+-------------+---+
| 1 | 6/20/2015 23:35 | 0 | x |
| 1 | 6/21/2015 0:09 | 0.558611111 | |
| 1 | 6/21/2015 0:49 | 0.667777778 | |
| 1 | 6/21/2015 1:30 | 0.688333333 | |
| 1 | 6/21/2015 9:38 | 8.133055556 | |
| 1 | 6/21/2015 10:09 | 0.511111111 | |
| 1 | 6/21/2015 10:45 | 0.600555556 | |
| 1 | 6/21/2015 11:09 | 0.411111111 | |
| 1 | 6/21/2015 11:32 | 0.381666667 | |
| 1 | 6/21/2015 11:55 | 0.385 | x |
| 1 | 6/21/2015 12:18 | 0.383055556 | |
| 1 | 6/21/2015 12:23 | 0.074444444 | |
| 1 | 6/22/2015 10:01 | 21.63527778 | x |
| 1 | 6/22/2015 10:24 | 0.380555556 | |
| 1 | 6/22/2015 10:46 | 0.373611111 | |
+---------+-----------------+-------------+---+
The "x" are the records that should be pulled since they're the first records in the 12 hour block.
If I understand correctly, you want the first record in each 12-hour block, where the blocks of time are defined by the first event time for each customer.
If so, you need to modify your query to get the difference from the first time for each customer. The rest is just arithmetic. The query would look something like this:
with t as (
      select cust_id, event_datetime,
             24 * (event_datetime -
                   min(event_datetime) over (partition by cust_id)
                  ) as difference
      from tbl
     )
select t.*
from (select t.*,
             row_number() over (partition by cust_id, floor(difference / 12)
                                order by difference) as seqnum
      from t
     ) t
where seqnum = 1;
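A runnable version of the same logic, using SQLite as a stand-in for Oracle (julianday() replaces Oracle's date subtraction and cast(... as int) replaces floor(); the sample timestamps are the cust_id 1 rows from the question):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table tbl (cust_id int, event_datetime text)")
con.executemany("insert into tbl values (?, ?)", [
    (1, "2015-06-20 23:35"), (1, "2015-06-21 00:09"), (1, "2015-06-21 00:49"),
    (1, "2015-06-21 01:30"), (1, "2015-06-21 09:38"), (1, "2015-06-21 10:09"),
    (1, "2015-06-21 10:45"), (1, "2015-06-21 11:09"), (1, "2015-06-21 11:32"),
    (1, "2015-06-21 11:55"), (1, "2015-06-21 12:18"), (1, "2015-06-21 12:23"),
    (1, "2015-06-22 10:01"), (1, "2015-06-22 10:24"), (1, "2015-06-22 10:46"),
])

rows = con.execute("""
    with t as (
        -- hours elapsed since each customer's first event
        select cust_id, event_datetime,
               24 * (julianday(event_datetime)
                     - min(julianday(event_datetime)) over (partition by cust_id)) as diff_hours
        from tbl
    )
    -- bucket by 12-hour block and keep the first row of each block
    select cust_id, event_datetime
    from (select t.*,
                 row_number() over (partition by cust_id, cast(diff_hours / 12 as int)
                                    order by diff_hours) as seqnum
          from t
         ) t
    where seqnum = 1
    order by cust_id, event_datetime
""").fetchall()

for r in rows:
    print(r)
```

The three rows returned are exactly the ones marked with "x" in the sample data.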