How to write this query in a more optimized way? - SQL

When I run this UPDATE statement, it puts a heavy load on the system: the query consumes 2.5 TB of spool, and the PJI is also high. I have already collected statistics on the table.
UPDATE PMP_CBS.RPT_BILLING_DETAIL_FINAL
FROM
(
    SEL ACCT_ID,
        Media_type_cd
    FROM PMP_VEW_CBS.RPT_BILLING_DETAIL_FINAL A
    LEFT JOIN PMP_AVEW.FCT_SUBS_ACCT B -- PMP_AVEW.FCT_SUBS_ACCT: this database and table don't exist
        ON CAST(A.ACCT_ID AS VARCHAR(50)) = CAST(B.BILLING_ACCT_ID AS VARCHAR(50))
    WHERE BILLING_MONTH = ADD_MONTHS(CURRENT_DATE - EXTRACT(DAY FROM CURRENT_DATE), 0) (FORMAT 'YYYYMM') (CHAR(06))
        AND Media_type_cd IS NOT NULL
    GROUP BY 1, 2
    QUALIFY ROW_NUMBER() OVER (PARTITION BY ACCT_ID ORDER BY ACCT_ID) = 1
) A
SET Media_type = A.Media_type_cd
WHERE PMP_VEW_CBS.RPT_BILLING_DETAIL_FINAL.ACCT_ID = A.ACCT_ID
    AND PMP_VEW_CBS.RPT_BILLING_DETAIL_FINAL.BILLING_MONTH = ADD_MONTHS(CURRENT_DATE - EXTRACT(DAY FROM CURRENT_DATE), 0) (FORMAT 'YYYYMM') (CHAR(06));
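A few things that may cut spool here (a sketch, not a tested fix): the QUALIFY already keeps one row per ACCT_ID, so the GROUP BY 1, 2 is redundant; the IS NOT NULL filter on B's column makes the LEFT JOIN effectively inner; and casting both join keys to VARCHAR(50) prevents the optimizer from using statistics on either side, so cast only the side whose type actually differs (or fix the column types). If ACCT_ID (plus BILLING_MONTH, if the table is partitioned on it) covers the target's primary index, a MERGE is also often cheaper than UPDATE ... FROM in Teradata. Assuming Media_type_cd comes from B and A.ACCT_ID is already VARCHAR:
MERGE INTO PMP_CBS.RPT_BILLING_DETAIL_FINAL tgt
USING
(
    SEL A.ACCT_ID,
        B.Media_type_cd
    FROM PMP_VEW_CBS.RPT_BILLING_DETAIL_FINAL A
    JOIN PMP_AVEW.FCT_SUBS_ACCT B -- the LEFT JOIN was effectively inner anyway
        ON A.ACCT_ID = CAST(B.BILLING_ACCT_ID AS VARCHAR(50)) -- cast one side only
    WHERE A.BILLING_MONTH = ADD_MONTHS(CURRENT_DATE - EXTRACT(DAY FROM CURRENT_DATE), 0) (FORMAT 'YYYYMM') (CHAR(06))
        AND B.Media_type_cd IS NOT NULL
    QUALIFY ROW_NUMBER() OVER (PARTITION BY A.ACCT_ID ORDER BY A.ACCT_ID) = 1
) src
ON tgt.ACCT_ID = src.ACCT_ID
    AND tgt.BILLING_MONTH = ADD_MONTHS(CURRENT_DATE - EXTRACT(DAY FROM CURRENT_DATE), 0) (FORMAT 'YYYYMM') (CHAR(06))
WHEN MATCHED THEN UPDATE
    SET Media_type = src.Media_type_cd;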

Related

Calculate time span between two specific statuses on the database for each ID

I have a table in the database that contains the status updates for each of my vehicles, and I want to calculate how many days each vehicle spends between two specific statuses, 'Maintenance' and 'Ready'.
My table looks something like this,
and I want the result to be like this: only showing the number of days a vehicle spends in maintenance before becoming ready on a specific day.
The code I wrote looks like this:
drop table if exists #temps1
select
    VehicleId,
    json_value(VehiclesHistoryStatusID.text, '$.en') as VehiclesHistoryStatus,
    VehiclesHistory.CreationTime,
    datediff(day, VehiclesHistory.CreationTime,
             lead(VehiclesHistory.CreationTime) over (order by VehiclesHistory.CreationTime)) as days,
    lag(json_value(VehiclesHistoryStatusID.text, '$.en')) over (order by VehiclesHistory.CreationTime) as PrevStatus,
    case
        when lag(json_value(VehiclesHistoryStatusID.text, '$.en')) over (order by VehiclesHistory.CreationTime)
             <> json_value(VehiclesHistoryStatusID.text, '$.en')
        then datediff(day, VehiclesHistory.CreationTime,
                      lag(VehiclesHistory.CreationTime) over (order by VehiclesHistory.CreationTime))
        else 0
    end as testing
into #temps1
from fleet.VehicleHistory VehiclesHistory
left join Fleet.Lookups as VehiclesHistoryStatusID
    on VehiclesHistoryStatusID.Id = VehiclesHistory.StatusId
where year(VehiclesHistory.CreationTime) > 2021
  and (VehiclesHistory.StatusId = 140 or VehiclesHistory.StatusId = 144)
group by VehiclesHistory.VehicleId, VehiclesHistory.CreationTime, VehiclesHistoryStatusID.text
order by VehicleId desc
drop table if exists #temps2
select * into #temps2 from #temps1 where testing <> 0
select * from #temps2
Try this
SELECT innerQ.VehichleID,innerQ.CreationDate,innerQ.Status
,SUM(DATEDIFF(DAY,innerQ.PrevMaintenance,innerQ.CreationDate)) AS DayDuration
FROM
(
SELECT t1.VehichleID,t1.CreationDate,t1.Status,
(SELECT top(1) t2.CreationDate FROM dbo.Test t2
WHERE t1.VehichleID=t2.VehichleID
AND t2.CreationDate<t1.CreationDate
AND t2.Status='Maintenance'
ORDER BY t2.CreationDate Desc) AS PrevMaintenance
FROM
dbo.Test t1 WHERE t1.Status='Ready'
) innerQ
WHERE innerQ.PrevMaintenance IS NOT NULL
GROUP BY innerQ.VehichleID,innerQ.CreationDate,innerQ.Status
In this query, the innermost subquery first finds the most recent 'Maintenance' date before each 'Ready' date (if one exists). We then calculate each time span with DATEDIFF and sum those spans per vehicle.
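On SQL Server 2012+, a LAG() variant avoids the correlated subquery entirely. A minimal sketch against the same assumed dbo.Test table: note the PARTITION BY VehichleID, which the window functions in the question are missing. It pairs each 'Ready' row with the row immediately before it, so it assumes the table holds only these two statuses:
SELECT VehichleID,
       CreationDate,
       DATEDIFF(DAY, PrevDate, CreationDate) AS DayDuration
FROM (
      SELECT VehichleID,
             CreationDate,
             Status,
             LAG(Status) OVER (PARTITION BY VehichleID ORDER BY CreationDate) AS PrevStatus,
             LAG(CreationDate) OVER (PARTITION BY VehichleID ORDER BY CreationDate) AS PrevDate
      FROM dbo.Test
     ) t
WHERE Status = 'Ready'
  AND PrevStatus = 'Maintenance';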

SQL Optimization: multiplication of two calculated fields generated by window functions

Given two time-series tables tbl1(time, b_value) and tbl2(time, u_value).
https://www.db-fiddle.com/f/4qkFJZLkZ3BK2tgN4ycCsj/1
Suppose we want to find the last value of u_value in each day, the daily cumulative sum of b_value on that day, as well as their multiplication, i.e. daily_u_value * b_value_cum_sum.
The following query calculates the desired output:
WITH cte AS (
SELECT
t1.time,
t1.b_value,
t2.u_value * t1.b_value AS bu_value,
last_value(t2.u_value)
OVER
(PARTITION BY DATE_TRUNC('DAY', t1.time) ORDER BY DATE_TRUNC('DAY', t2.time) ) AS daily_u_value
FROM stackoverflow.tbl1 t1
LEFT JOIN stackoverflow.tbl2 t2
ON
t1.time = t2.time
)
SELECT
DATE_TRUNC('DAY', c.time) AS time,
AVG(c.daily_u_value) AS daily_u_value,
SUM( SUM(c.b_value)) OVER (ORDER BY DATE_TRUNC('DAY', c.time) ) as b_value_cum_sum,
AVG(c.daily_u_value) * SUM( SUM(c.b_value) ) OVER (ORDER BY DATE_TRUNC('DAY', c.time) ) as daily_u_value_mul_b_value
FROM cte c
GROUP BY 1
ORDER BY 1 DESC
I was wondering what I can do to optimize this query. Is there an alternative solution that generates the same result?
db fiddle demo
Your query's execution time was 250.666 ms versus 205.103 ms for mine, so there is some progress. The main gain comes from reducing the number of casts, since your query casts from timestamptz to timestamp many times; I wonder why you don't just add another date column. I executed my query first and then yours, which makes the comparison fair, since a second execution is generally faster than the first.
alter table tbl1 add column t1_date date;
alter table tbl2 add column t2_date date;
update tbl1 set t1_date = time::date;
update tbl2 set t2_date = time::date;
WITH cte AS (
SELECT
t1.t1_date,
t1.b_value,
t2.u_value * t1.b_value AS bu_value,
last_value(t2.u_value)
OVER
(PARTITION BY t1_date ORDER BY t2_date ) AS daily_u_value
FROM stackoverflow.tbl1 t1
LEFT JOIN stackoverflow.tbl2 t2
ON
t1.time = t2.time
)
SELECT
t1_date,
AVG(c.daily_u_value) AS daily_u_value,
SUM( SUM(c.b_value)) OVER (ORDER BY t1_date ) as b_value_cum_sum,
AVG(c.daily_u_value) * SUM( SUM(c.b_value) ) OVER
(ORDER BY t1_date ) as daily_u_value_mul_b_value
FROM cte c
GROUP BY 1
ORDER BY 1 DESC
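If the new date columns should stay in sync automatically instead of being backfilled with UPDATE, a generated column could do the same job. A sketch, assuming PostgreSQL 12+ and that time is timestamptz as noted above; the cast has to go through an explicit time zone to be immutable (and tbl2 would be handled the same way):
-- hypothetical alternative to the manual ALTER + UPDATE above
ALTER TABLE tbl1
    ADD COLUMN t1_date date
    GENERATED ALWAYS AS (("time" AT TIME ZONE 'UTC')::date) STORED;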

SQL help: building cases based on dates of service with every 30-day gap

I was wondering if someone could help me out with the following logic. I'm trying to create a new case number every time a unique patient has a 30-day gap in services. Currently the case numbers aren't going past 2.
select t1.*,
dense_rank() over(
partition by t1.D_UNIQUEPATIENTID
order by t1.d_ServiceStartDate) episode_rank
into #temp2
from #temp1 t1
-- select * from #temp2
--This is every episode and the days between episodes.
select *,
    isnull(abs(datediff(day, t1.d_ServiceStartDate,
        (select top 1 t2.[d_ServiceEndDate]
         from #temp2 as t2
         where t2.D_UNIQUEPATIENTID = t1.D_UNIQUEPATIENTID
           and t2.episode_rank < t1.episode_rank
         order by t2.episode_rank desc))), 0) as day_count,
    1 AS e2
into #temp3
from #temp2 as t1
SELECT *
,(CASE WHEN t.day_count > 30 THEN t.e2 + 1 ELSE t.e2 end)AS Case_Num
into #temp4
FROM #temp3 AS t
-- select * from #temp4
-- This should return the $amt per case, per member
select D_UNIQUEPATIENTID, Case_Num,min(d_ServiceStartDate) as 'Start_Date',
max(d_ServiceEndDate) as End_Date,Sum(Visits)as Visits,sum(allowed) as 'Allowed Total'
from #temp4
where [d_IncurredPD] between '201801' and '201906'
group by D_UNIQUEPATIENTID, Case_Num
I'm trying to create a new case number every time a unique patient has a 30-day gap in services.
It is unclear whether you want this per patient or overall. I assume per patient, although the solution can be adapted.
I have no idea how the code sample relates to this question, so I'll write this generically:
select t.*,
sum(case when prev_servicedate >= servicedate - interval '30 day'
then 0 else 1
end) over (partition by patientid order by servicedate) as patient_case
from (select t.*,
lag(servicedate) over (partition by patientid order by servicedate) as prev_servicedate
from t
) t;
Note that this uses standard date functions; these usually vary by database but you haven't specified the database (as I write this).
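Since the question's code uses #temp tables, which suggests SQL Server, here is a sketch of the same idea in T-SQL (assuming SQL Server 2012+ for the windowed SUM, and reusing the question's #temp1 and column names):
select t.*,
       sum(case when prev_end >= dateadd(day, -30, t.d_ServiceStartDate)
                then 0 else 1
           end) over (partition by t.D_UNIQUEPATIENTID
                      order by t.d_ServiceStartDate) as Case_Num
from (select t1.*,
             lag(t1.d_ServiceEndDate) over (partition by t1.D_UNIQUEPATIENTID
                                            order by t1.d_ServiceStartDate) as prev_end
      from #temp1 t1
     ) t;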

Slow-running query, PostgreSQL

I have a very slow query (30+ minutes) that I think can be sped up with more efficient coding. Below is the code and the resulting query plan. I am looking for ways to speed up this query, which performs several joins on large tables.
drop table if exists totalshad;
create temporary table totalshad as
select pricedate, hour, sum(cast(price as numeric)) as totalprice
from pjm.rtcons
where rtcons.pricedate >= '2017-12-01'
-- and rtcons.pricedate <= '2018-01-23'
group by pricedate, hour
order by pricedate, hour;
-----------------------------
drop table if exists percshad;
create temporary table percshad as
select totalshad.pricedate, totalshad.hour, facility,
       round(sum(cast(price as numeric)), 2) as cons_shad,
       round(sum(cast(totalprice as numeric)), 2) as total_shad,
       round(cast(price/totalprice as numeric), 4) as per_shad
from totalshad
join pjm.rtcons on
     rtcons.pricedate = totalshad.pricedate
     and rtcons.hour = totalshad.hour
     and facility = 'ETOWANDA-NMESHOPP ETL 1057 A 115 KV'
where totalprice <> 0 and totalshad.pricedate > '2017-12-01'
group by totalshad.pricedate, totalshad.hour, facility, (price/totalprice)
order by per_shad desc
limit 5;
EXPLAIN select facility, percshad.pricedate, percshad.hour, per_shad,
       minmcc.rtmcc, minnode.nodename, maxmcc.rtmcc, maxnode.nodename
from percshad
join pjm.prices minmcc on
     minmcc.pricedate = percshad.pricedate
     and minmcc.hour = percshad.hour
     and minmcc.rtmcc = (select min(rtmcc) from pjm.prices
                         where pricedate = percshad.pricedate and hour = percshad.hour)
join pjm.nodes minnode on
     minnode.node_id = minmcc.node_id
join pjm.prices maxmcc on
     maxmcc.pricedate = percshad.pricedate
     and maxmcc.hour = percshad.hour
     and maxmcc.rtmcc = (select max(rtmcc) from pjm.prices
                         where pricedate = percshad.pricedate and hour = percshad.hour)
join pjm.nodes maxnode on
     maxnode.node_id = maxmcc.node_id
order by per_shad desc
limit 5
And here is the EXPLAIN output:
UPDATE: I have now simplified my code down to the following, but as can be seen from the EXPLAIN, it still takes forever to find the node_id in the last select statement:
drop table if exists totalshad;
create temporary table totalshad as
select pricedate, hour, sum(cast(price as numeric)) as totalprice
from pjm.rtcons
where rtcons.pricedate >= '2017-12-01'
-- and rtcons.pricedate <= '2018-01-23'
group by pricedate, hour
order by pricedate, hour;
-----------------------------
drop table if exists percshad;
create temporary table percshad as
select totalshad.pricedate, totalshad.hour, facility,
       round(sum(cast(price as numeric)), 2) as cons_shad,
       round(sum(cast(totalprice as numeric)), 2) as total_shad,
       round(cast(price/totalprice as numeric), 4) as per_shad
from totalshad
join pjm.rtcons on
     rtcons.pricedate = totalshad.pricedate
     and rtcons.hour = totalshad.hour
     and facility = 'ETOWANDA-NMESHOPP ETL 1057 A 115 KV'
where totalprice <> 0 and totalshad.pricedate > '2017-12-01'
group by totalshad.pricedate, totalshad.hour, facility, (price/totalprice)
order by per_shad desc
limit 5;
drop table if exists mincong;
create temporary table mincong as
select pricedate, hour, min(rtmcc) as rtmcc
from pjm.prices JOIN percshad USING (pricedate, hour)
group by pricedate, hour;
EXPLAIN select distinct on (pricedate, hour) prices.node_id
from mincong
JOIN pjm.prices USING (pricedate, hour, rtmcc)
group by pricedate, hour, node_id
The problem is the subselects in the join conditions; they have to be executed for every row joined.
If you cannot get rid of them, try to create an index that supports the subselects as well as possible:
CREATE INDEX ON pjm.prices(pricedate, hour, rtmcc);
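One way to get rid of them (a sketch, assuming the same pjm.prices columns as above) is to compute the per-(pricedate, hour) extremes once and join against them, instead of re-running a correlated subselect for every joined row:
WITH extremes AS (
    SELECT pricedate, hour, min(rtmcc) AS min_rtmcc, max(rtmcc) AS max_rtmcc
    FROM pjm.prices
    GROUP BY pricedate, hour
)
SELECT facility, percshad.pricedate, percshad.hour, per_shad,
       minmcc.rtmcc, minnode.nodename, maxmcc.rtmcc, maxnode.nodename
FROM percshad
JOIN extremes
     ON extremes.pricedate = percshad.pricedate
     AND extremes.hour = percshad.hour
JOIN pjm.prices minmcc
     ON minmcc.pricedate = percshad.pricedate
     AND minmcc.hour = percshad.hour
     AND minmcc.rtmcc = extremes.min_rtmcc
JOIN pjm.nodes minnode ON minnode.node_id = minmcc.node_id
JOIN pjm.prices maxmcc
     ON maxmcc.pricedate = percshad.pricedate
     AND maxmcc.hour = percshad.hour
     AND maxmcc.rtmcc = extremes.max_rtmcc
JOIN pjm.nodes maxnode ON maxnode.node_id = maxmcc.node_id
ORDER BY per_shad DESC
LIMIT 5;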

Speed up execution of query to find sequential rows that have a changed value

My goal is to go through my dataset, compare each ITEM_NO/LOC day-by-day, and identify days where the VAL has changed from the day before. Right now, I do that by sorting, creating a column of row numbers, joining the table to itself offset by a row, and then only picking rows where VAL has changed.
Each month has about half a billion records. In total there's around 2.7 billion records. The data is stored in DB2 BLU. The table already has indices for ITEM_NO, LOC, and ARCV_DATE. I only have select access to the table.
I think the big bottleneck is the order by in the select statement given that n is so large. One idea I had was to try to do the sorting month-by-month and then union each of the months together.
Here's what I have so far:
with x as (
select ITEM_NO, LOC, ARCV_DATE, VAL, ROW_NUMBER() over (order by ITEM_NO, LOC, ARCV_DATE) as RN
from MY_SCHEMA.MY_TABLE a
where
ARCV_DATE >= '2017-06-01'
and ARCV_DATE < '2017-07-01'
)
SELECT
x.ITEM_NO,
x.LOC,
y.ARCV_DATE as CHANGE_DATE,
y.VAL,
x.VAL as OLD_VAL
FROM x
INNER JOIN x AS y
ON x.rn = y.rn + 1
WHERE
x.VAL <> y.VAL
and x.ITEM_NO = y.ITEM_NO
and x.LOC = y.LOC
What could I do to improve performance on this for such a dataset?
Without any write access, your options are very limited because the query isn't that complex. You could try avoiding the join altogether by using LAG() OVER(), such as this:
SELECT
*
FROM (
SELECT
ITEM_NO
, LOC
, ARCV_DATE
, VAL
, LAG(ARCV_DATE, 1) OVER (PARTITION BY ITEM_NO, LOC ORDER BY ARCV_DATE DESC) AS CHANGE_DATE
, LAG(VAL, 1) OVER (PARTITION BY ITEM_NO, LOC ORDER BY ARCV_DATE DESC) AS OLD_VAL
FROM MY_SCHEMA.MY_TABLE
WHERE ARCV_DATE >= '2017-06-01'
AND ARCV_DATE < '2017-07-01'
) d
WHERE ( VAL <> OLD_VAL OR OLD_VAL IS NULL )
But tuning this further could require adding or changing indexes.
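For example (purely hypothetical, since the question states only SELECT access is available, so this would be a request for the DBA; it also assumes the table is row-organized, as column-organized BLU tables generally do not take non-unique secondary indexes), an index matching the PARTITION BY and ORDER BY of the LAG() calls would let DB2 read the rows already ordered instead of sorting half a billion records per month:
-- hypothetical index name; requires DDL rights on MY_SCHEMA.MY_TABLE
CREATE INDEX MY_SCHEMA.MY_TABLE_IX1
    ON MY_SCHEMA.MY_TABLE (ITEM_NO, LOC, ARCV_DATE, VAL);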
SELECT currentval.ITEM_NO,
       currentval.LOC,
       currentval.ARCV_DATE AS currentdate,
       prevval.ARCV_DATE AS Previousdate,
       currentval.val AS currentval,
       prevval.val AS Previousval
FROM MY_SCHEMA.MY_TABLE currentval JOIN
     MY_SCHEMA.MY_TABLE prevval ON
     currentval.ITEM_NO = prevval.ITEM_NO
WHERE currentval.loc = prevval.loc
  AND currentval.val <> prevval.val
  AND currentval.ARCV_DATE = prevval.ARCV_DATE + 1 DAY
  AND currentval.ARCV_DATE >= '2017-06-01'
  AND prevval.ARCV_DATE < '2017-07-01'
Assuming that values change from one day to the next, this query retrieves the values that changed from the previous day to the current day, via the join condition:
AND currentval.ARCV_DATE = prevval.ARCV_DATE + 1 DAY