Calculating elapsed time in non-contiguous rows using SQL

I need to deduce uptime for servers using SQL with a table that looks as follows:
| Row | ID | Status | Timestamp  |
|-----|----|--------|------------|
| 1   | A1 | UP     | 1598451078 |
| 2   | A2 | UP     | 1598457488 |
| 3   | A3 | UP     | 1598457489 |
| 4   | A1 | DOWN   | 1598458076 |
| 5   | A3 | DOWN   | 1598461096 |
| 6   | A1 | UP     | 1598466510 |
In this example, A1 went down on Wed, 26 Aug 2020 16:07:56 and came back up at Wed, 26 Aug 2020 18:28:30. This means I need to find the difference between rows 6 and 4 using the ID field and display it as an additional column named "Uptime".
I have found several answers that explain how to use aliases and inner joins to calculate the difference between contiguous rows (e.g. How to get difference between two rows for a column field?), but none that explains how to do so for non-contiguous rows.
For example, this piece of code from https://www.mysqltutorial.org/mysql-tips/mysql-compare-calculate-difference-successive-rows/ gives a possible solution, but I don't know how to adapt it to compare the rows based on the ID field:
SELECT
g1.item_no,
g1.counted_date from_date,
g2.counted_date to_date,
(g2.qty - g1.qty) AS receipt_qty
FROM
inventory g1
INNER JOIN
inventory g2 ON g2.id = g1.id + 1
WHERE
g1.item_no = 'A';
Any help would be much appreciated.

Basically, you need the total time minus the downtime.
If you want the different periods, you can use:
select status, max(timestamp), min(timestamp),
max(timestamp) - min(timestamp)
from (select t.*,
row_number() over (order by timestamp) as seqnum,
row_number() over (partition by status order by timestamp) as seqnum2
from t
) t
group by status, (seqnum - seqnum2);
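If several servers share the table, as in your sample data, the same islands idea can be keyed per server. A minimal sketch, assuming the table is called t with the id, status and timestamp columns shown above:
-- Sketch: UP/DOWN periods per server id (untested)
select id, status, min(timestamp), max(timestamp),
       max(timestamp) - min(timestamp) as duration
from (select t.*,
             row_number() over (partition by id order by timestamp) as seqnum,
             row_number() over (partition by id, status order by timestamp) as seqnum2
      from t
     ) t
group by id, status, (seqnum - seqnum2);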
However, for your purposes, for the total uptime:
select sum(coalesce(next_timestamp, max_uptimestamp) - timestamp)
from (select t.*,
             lag(status) over (order by timestamp) as prev_status,
             lead(timestamp) over (order by timestamp) as next_timestamp,
             max(case when status = 'UP' then timestamp end) over () as max_uptimestamp
      from t
     ) t
where status = 'UP' and
      (prev_status = 'DOWN' or prev_status is null);
Basically, this counts all the time from the first UP to the next DOWN or to the last UP. It then sums that up.
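Since the question asks for uptime per server, the same query can be grouped on the ID column. A minimal sketch, again assuming a table t(id, status, timestamp) with epoch-second timestamps:
-- Sketch: total uptime in seconds for each server id (untested)
select id,
       sum(coalesce(next_timestamp, max_uptimestamp) - timestamp) as uptime_seconds
from (select t.*,
             lag(status) over (partition by id order by timestamp) as prev_status,
             lead(timestamp) over (partition by id order by timestamp) as next_timestamp,
             max(case when status = 'UP' then timestamp end) over (partition by id) as max_uptimestamp
      from t
     ) t
where status = 'UP' and
      (prev_status = 'DOWN' or prev_status is null)
group by id;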

Related

How to create this BigQuery query for a retail dataset

I have a table with user retail transactions. It includes sales and cancels. If Qty is positive it is a sale; if negative, a cancel. I want to attach each cancel to the most appropriate sale. So I have a table like this:
| CustomerId | StockId | Qty | Date |
|--------------+-----------+-------+------------|
| 1 | 100 | 50 | 2020-01-01 |
| 1 | 100 | -10 | 2020-01-10 |
| 1 | 100 | 60 | 2020-02-10 |
| 1 | 100 | -20 | 2020-02-10 |
| 1 | 100 | 200 | 2020-03-01 |
| 1 | 100 | 10 | 2020-03-05 |
| 1 | 100 | -90 | 2020-03-10 |
User with ID 1 has the following actions: buy 50 -> return 10 -> buy 60 -> return 20 -> buy 200 -> buy 10 -> return 90. For each cancel row (with negative Qty) I want to find the previous row (by Date) with a positive Qty greater than the cancel Qty.
So I need a BigQuery query to create a table like this:
| CustomerId | StockId | Qty | Date | CancelQty |
|--------------+-----------+-------+------------+-------------|
| 1 | 100 | 50 | 2020-01-01 | -10 |
| 1 | 100 | 60 | 2020-02-10 | -20 |
| 1 | 100 | 200 | 2020-03-01 | -90 |
| 1 | 100 | 10 | 2020-03-05 | 0 |
Can anybody help me with this query? I have created one candidate query (splitting cancels and sales, joining them, and doing some cleanup), but it works incorrectly in the above case.
I use BigQuery, so any BQ SQL features could be applied.
Any ideas will be helpful.
You can use the following query.
;WITH result AS (
select t1.*,t2.Qty as cQty,t2.Date as Date_t2 from
(select *,ROW_NUMBER() OVER (ORDER BY qty DESC) AS [ROW NUMBER] from Test) t1
join
(select *,ROW_NUMBER() OVER (ORDER BY qty) AS [ROW NUMBER] from Test) t2
on t1.[ROW NUMBER] = t2.[ROW NUMBER]
)
select CustomerId,StockId,Qty,Date,ISNULL(cQty, 0) As CancelQty,Date_t2
from (select CustomerId,StockId,Qty,Date,case
when cQty < 0 then cQty
else NULL
end AS cQty,
case
when cQty < 0 then Date_t2
else NULL
end AS Date_t2 from result) t
where qty > 0
order by cQty desc
result: https://dbfiddle.uk
You can do this as a gaps-and-islands problem. Basically, add a grouping column to the rows based on a cumulative reverse count of negative values. Then within each group, choose the first row where the sum is positive. So:
select t.* except (cancelqty, grp),
       (case when min(case when cancelqty + qty >= 0 then date end) over (partition by customerid, grp) = date
             then cancelqty
             else 0
        end) as cancelqty
from (select t.*,
             min(qty) over (partition by customerid, grp) as cancelqty
      from (select t.*,
                   countif(qty < 0) over (partition by customerid order by date desc) as grp
            from transactions t
           ) t
     ) t;
Note: This works for the data you have provided. However, there may be complicated scenarios where this does not work. In fact, I don't think there is a simple optimal solution assuming that the returns are not connected to the original sales. I would suggest that you fix the data model so you record where the returns come from.
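For illustration, a minimal sketch of such a model, assuming a hypothetical id column on every row and an original_sale_id column on cancel rows (neither exists in your current schema); attribution then becomes a plain join and aggregation:
-- Hypothetical schema: transactions(id, CustomerId, StockId, Qty, Date, original_sale_id)
-- where original_sale_id on a cancel row points at the sale it belongs to.
select s.CustomerId, s.StockId, s.Qty, s.Date,
       ifnull(sum(c.Qty), 0) as CancelQty
from transactions s
left join transactions c
  on c.original_sale_id = s.id
where s.Qty > 0
group by s.id, s.CustomerId, s.StockId, s.Qty, s.Date;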
The query below seems to satisfy the conditions and produce the output mentioned. The solution is based on joining the base table (t) with the corresponding cancelled-qty rows from the same table (t1).
First, a self-join on CustomerId and StockId is done, since the rows need to correspond to the same customer and product.
Additionally, we bring in the cancelled transactions (t1) that happened after the base row in table t (t.Dt<=t1.Dt), and the clause t1.Qty<0 ensures these are negative quantities.
Further, we cannot attribute a cancelled qty that is larger than the original qty, so we check that the positive qty is at least the cancelled qty; negating the cancel qty makes the comparison easy: -(t1.Qty)<=t.Qty.
After the join, we are interested only in the positive quantities, so a WHERE clause (t.Qty>0) filters out the cancel rows from the base table t.
Now each base row is joined to every eligible cancel row on or after its transaction date. For example, the Qty 50 row could have several cancelled quantities mapped to it, but we only want the one that came immediately after. So we group the base rows and keep only the earliest cancel date via the HAVING condition: HAVING IFNULL(t1.dt, '0')=MIN(IFNULL(t1.dt, '0')).
Finally we get the rows we need; the last column can be excluded, if required, with an outer SELECT.
SELECT t.CustomerId,t.StockId,t.Qty,t.Dt,IFNULL(t1.Qty, 0) CancelQty
,t1.dt dt_t1
FROM tbl t
LEFT JOIN tbl t1 ON t.CustomerId=t1.CustomerId AND
t.StockId=t1.StockId
AND t.Dt<=t1.Dt AND t1.Qty<0 AND -(t1.Qty)<=t.Qty
WHERE t.Qty>0
GROUP BY 1,2,3,4
HAVING IFNULL(t1.dt, '0')=MIN(IFNULL(t1.dt, '0'))
ORDER BY 1,2,4,3
fiddle
Consider below approach
with sales as (
select * from `project.dataset.table` where Qty > 0
), cancels as (
select * from `project.dataset.table` where Qty < 0
)
select any_value(s).*,
ifnull(array_agg(c.Qty order by c.Date limit 1)[offset(0)], 0) as CancelQty
from sales s
left join cancels c
on s.CustomerId = c.CustomerId
and s.StockId = c.StockId
and s.Date <= c.Date
and s.Qty > abs(c.Qty)
group by format('%t', s)
If applied to the sample data in your question, the output is the expected result shown above.

Counting current items by month

I'm trying to build a monthly tally of active equipment, grouped by service area from a database log table. I think I'm 90% of the way there; I have a list of months, along with the total number of items that existed, and grouped by region.
However, I also need to know the state of each item as they were on the first of each month, and this is the part I'm stuck on. For instance, Item 1 is in region A in January, but moves to Region B in February. Item 2 is marked as 'inactive' in February, so shouldn't be counted. My existing query will always count item 1 in region A, and item 2 as 'active'.
I can correctly show that Item 3 is deleted in March, and Item 4 doesn't show up until the April count. I realize that I'm getting the first values because my query is specifying the min date, I'm just not sure how I need to change it to get what I want.
I think I'm looking for a way to group by Max(OperationDate) for each Month.
The Table looks like this:
| EQUIPID | EQUIPNAME | EQUIPACTIVE | DISTRICT | REGION | OPERATIONDATE | OPERATION |
|---------|-----------|-------------|----------|--------|----------------------|-----------|
| 1 | Item 1 | 1 | 1 | A | 2015-01-01T00:00:00Z | INS |
| 2 | Item 2 | 1 | 1 | A | 2015-01-01T00:00:00Z | INS |
| 3 | Item 3 | 1 | 1 | A | 2015-01-01T00:00:00Z | INS |
| 2 | Item 2 | 0 | 1 | A | 2015-02-10T00:00:00Z | UPD |
| 1 | Item 1 | 1 | 1 | B | 2015-02-15T00:00:00Z | UPD |
| 3 | (null) | (null) | (null) | (null) | 2015-02-21T00:00:00Z | DEL |
| 1 | Item 1 | 1 | 1 | A | 2015-03-01T00:00:00Z | UPD |
| 4 | Item 4 | 1 | 1 | B | 2015-03-10T00:00:00Z | INS |
There is also a subtable that holds attributes that I care about. Its structure is similar. Unfortunately, due to previous design decisions, there is no correlation between operations in the two tables. Any joins will need to be done using the EquipmentID, with the overlapping states matched up for each date.
Current query:
--cte to build date list
WITH calendar (dt) AS
(SELECT &fromdate from dual
UNION ALL
SELECT Add_Months(dt,1)
FROM calendar
WHERE dt < &todate)
SELECT dt, a.district, a.region, count(*)
FROM
(SELECT EQUIPID, DISTRICT, REGION, OPERATION, MIN(OPERATIONDATE ) AS FirstOp, deleted.deldate
FROM Equipment_Log
LEFT JOIN
(SELECT EQUIPID,MAX(OPERATIONDATE) as DelDate
FROM Equipment_Log
WHERE OPERATION = 'DEL'
GROUP BY EQUIPID
) Deleted
ON Equipment_Log.EQUIPID = Deleted.EQUIPID
WHERE OPERATION <> 'DEL' --AND additional unimportant filters
GROUP BY EQUIPID,DISTRICT, REGION , OPERATION, deldate
) a
INNER JOIN calendar
ON (calendar.dt >= FirstOp AND calendar.dt < deldate)
OR (calendar.dt >= FirstOp AND deldate is null)
LEFT JOIN
( SELECT EQUIPID, MAX(OPERATIONDATE) as latestop
FROM SpecialEquip_Table_Log
--where SpecialEquip filters
group by EQUIPID
) SpecialEquip
ON a.EQUIPID = SpecialEquip.EQUIPID and calendar.dt >= SpecialEquip.latestop
GROUP BY dt, district, region
ORDER BY dt, district, region
Take only the last operation for each id in each month. This is what row_number() and where rn = 1 do.
We have calendar and data. Make a partitioned outer join.
I assumed that you need to fill in values for months where entries for an id are missing. So nvl(lag() ignore nulls) is needed, because if something appeared in January it still exists in February and March, and we need the district and region values from the last non-empty row.
Now you have everything needed for the count. The part where you mentioned SpecialEquip_Table_Log is up to you: you left-joined this table but did not use it later, so what is it for? Join it if you need it; you have the id.
db<>fiddle
with
calendar(mth) as (
select date '2015-01-01' from dual union all
select add_months(mth, 1) from calendar where mth < date '2015-05-01'),
data as (
select id, dis, reg, dt, op, act
from (
select equipid id, district dis, region reg,
to_char(operationdate, 'yyyy-mm') dt,
row_number()
over (partition by equipid, trunc(operationdate, 'month')
order by operationdate desc) rn,
operation op, nvl(equipactive, 0) act
from t)
where rn = 1 )
select mth, dis, reg, sum(act) cnt
from (
select id, mth,
nvl(dis, lag(dis) ignore nulls over (partition by id order by mth)) dis,
nvl(reg, lag(reg) ignore nulls over (partition by id order by mth)) reg,
nvl(act, lag(act) ignore nulls over (partition by id order by mth)) act
from calendar
left join data partition by (id) on dt = to_char(mth, 'yyyy-mm') )
group by mth, dis, reg
having sum(act) > 0
order by mth, dis, reg
It may seem complicated, so please run subqueries separately at first to see what is going on. And test :) Hope this helps.

Return Min Start Date, Max End Date and Latest Category for a group of consecutive records based on date

I have a table which contains a Person ID, Category_ID, Start Date, End Date and Category. When the Start Date is the same as the Previous End Date then this is a continuation and merely denotes a Category Change. There can be many Category changes within a continuous date period.
I want to return the First Start Date and Last End Date and Category Type for each person.
I thought about identifying all those that have continuous date period for a person and return max and min etc. But this doesn't take into account when a person has multiple continuous date periods, i.e. one period ends and there is a break and then there is another continuous period with category changes.
Example output:
+---------+------------+------------+---------------+
| ID | start_dt | end_dt | category_type |
+---------+------------+------------+---------------+
| 8105755 | 26/01/2016 | 21/04/2016 | D |
| 8105859 | 21/04/2016 | 22/04/2016 | A |
| 8105861 | 22/04/2016 | 26/04/2016 | D |
| 8105870 | 26/04/2016 | 19/10/2016 | A |
+---------+------------+------------+---------------+
So in this case, as each row's start_dt is the same as the preceding row's end_dt, this is one continuous period, so I want to return a single row with the first start date, the last end date and the latest category type, as below:
+---------+------------+------------+---------------+
| ID | start_dt | end_dt | category_type |
+---------+------------+------------+---------------+
| 8105870 | 26/01/2016 | 19/10/2016 | A |
+---------+------------+------------+---------------+
This is a type of gaps-and-islands problem, which you can solve using a cumulative sum to identify the groups. The sum is based on when groups start. So:
select distinct
       first_value(t.id) over (partition by grp order by t.start_dt desc) as id,
       min(t.start_dt) over (partition by grp) as start_dt,
       max(t.end_dt) over (partition by grp) as end_dt,
       first_value(t.category) over (partition by grp order by t.start_dt desc) as category_type
from (select t.*,
             sum(case when tprev.id is null then 1 else 0 end) over (order by t.start_dt) as grp
      from t left join
           t tprev
           on tprev.end_dt = t.start_dt
     ) t;
Note: This uses select distinct simply because SQL Server does not offer "first()"/"last()" functions for aggregation.
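If the select distinct bothers you, the same result can be had by numbering the rows in each island and keeping one per group. A minimal sketch under the same assumptions (a table t with id, start_dt, end_dt and category columns):
-- Sketch: one row per island without select distinct (untested)
select id, start_dt, end_dt, category_type
from (select t.id, t.category as category_type,
             min(t.start_dt) over (partition by grp) as start_dt,
             max(t.end_dt) over (partition by grp) as end_dt,
             row_number() over (partition by grp order by t.start_dt desc) as rn
      from (select t.*,
                   sum(case when tprev.id is null then 1 else 0 end) over (order by t.start_dt) as grp
            from t left join
                 t tprev
                 on tprev.end_dt = t.start_dt
           ) t
     ) t
where rn = 1;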

in SQL, how to remove distinct column values (not rows, as usually done)

I have a production case, for a supply chain. We have devices that are moved around in warehouses, and I need to find the previous warehouse locations.
I have a table like this:
+--------+------------+--------+--------+--------+
| device | current_WH | prev_1 | prev_2 | prev_3 |
+--------+------------+--------+--------+--------+
| 1 | AB | KK | KK | KK |
| 2 | DE | DE | DE | NQ |
| 3 | FF | MM | ST | ST |
+--------+------------+--------+--------+--------+
I need to find the distinct values of current_WH and the "prev" columns. So I'm not flattening rows, but narrowing columns. I need to get this:
+--------+------------+--------+--------+--------+
| device | current_WH | prev_1 | prev_2 | prev_3 |
+--------+------------+--------+--------+--------+
| 1 | AB | KK | blank | blank |
| 2 | DE | NQ | blank | blank |
| 3 | FF | MM | ST | blank |
+--------+------------+--------+--------+--------+
I'll figure out nulls or blanks later. But for now I need one row for each device that shows the current WH and previous locations. There could be any number - not always the same.
If I do "distinct" that flattens rows. Doing a distinct and group by doesn't achieve the requirement.
Any help is appreciated. Thanks!
You need to unpivot the column values into rows, because that makes it easier to compare each value with earlier values, and then pivot again to recover the original schema.
Unpivot the columns into rows and add a new grp column recording each column's position; this helps recover the expected layout later.
Use the LAG function to get the previous occurrence of a value so it can be compared with the current value.
Use SUM with CASE WHEN as a window function to build a cumulative count of rows whose previous occurrence matches the current value.
If that cumulative count is greater than 0, the value is a repeat.
It looks like this:
with cteUnion as (
    SELECT device, current_WH, 0 grp
    FROM T
    UNION ALL
    SELECT device, prev_1, 1 grp
    FROM T
    UNION ALL
    SELECT device, prev_2, 2 grp
    FROM T
    UNION ALL
    SELECT device, prev_3, 3 grp
    FROM T
), cte1 as (
    SELECT *,
           LAG(current_WH) over (partition by current_WH order by grp) prevVal
    from cteUnion
), cteResult as (
    SELECT *,
           (CASE WHEN sum(CASE WHEN prevVal = current_WH then 1 else 0 end)
                          over (partition by device order by grp) > 0
                 THEN 'Block'
                 else current_WH end) val
    FROM cte1
)
select device,
       MAX(CASE WHEN grp = 0 then val end) current_WH,
       MAX(CASE WHEN grp = 1 then val end) prev_1,
       MAX(CASE WHEN grp = 2 then val end) prev_2,
       MAX(CASE WHEN grp = 3 then val end) prev_3
from cteResult
GROUP BY device
sqlfiddle
NOTE
The grp number assigned to each column depends on the column order you want to preserve.

Filter table: Keep N rows after each row with a special value

I have a table with a huge amount of data with this structure (simplified):
+--------+-------------------------+-------+
| id | datetime | type |
+--------+-------------------------+-------+
| 1 | 2015-08-13 17:50:41 | 1 |
| 2 | 2015-08-13 17:50:45 | 0 |
| 3 | 2015-08-14 17:50:56 | 0 |
| 4 | 2015-08-14 17:50:59 | 0 |
+--------+-------------------------+-------+
Rows with type=1 are followed by lots of rows with type=0.
I need to do an intelligent clean-up:
I want to keep rows with type=0 that follow a row with type=1, but only during the hour after that type=1 row's timestamp
And keep at least one row with type=0 per hour
I don't know if it's possible to do that with a query, or if I will have to loop through all rows with a script.
I use PostgreSQL
I don't have Postgres here to test, but this should return all of the data you want to keep:
SELECT ID FROM (
    SELECT ID FROM (
        SELECT
            id,
            datetime,
            type,
            LAG(type) OVER (ORDER BY id ASC) AS prev_type,
            LAG(datetime) OVER (ORDER BY id ASC) AS prev_date
        FROM employees
    ) w   -- window columns must be computed in a subquery before they can be filtered on
    WHERE
        type = 0 AND
        prev_type = 1 AND
        EXTRACT(EPOCH FROM (datetime - prev_date)) < 3601
    UNION
    SELECT MAX(ID) FROM employees GROUP BY TO_CHAR(datetime, 'DDMMYYYYHH24')
) keep
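For comparison, here is a hedged sketch that follows the stated rule more directly: keep every type=1 row, every type=0 row within one hour of the most recent type=1 row, and the first row of each remaining clock hour. It reuses the placeholder table name employees from above and is untested:
-- Sketch (PostgreSQL), assuming employees(id, datetime, type)
SELECT id
FROM (
    SELECT id, datetime, type,
           MAX(CASE WHEN type = 1 THEN datetime END)
               OVER (ORDER BY datetime, id) AS last_type1,            -- most recent type=1 so far
           ROW_NUMBER() OVER (PARTITION BY date_trunc('hour', datetime)
                              ORDER BY datetime, id) AS rn_in_hour    -- first row in each clock hour
    FROM employees
) w
WHERE type = 1
   OR datetime - last_type1 <= interval '1 hour'
   OR rn_in_hour = 1;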