Trying to Get Running Total to Work - SQL

I have a query that works perfectly for summing the ElapsedTime that counts as non-productive (NonProductive = 1). However, I have been trying to get a running total to work. This is the main query, which totals by ReportNo for each day:
Select SUM(CASE
When NonProductive = 1 Then ElapsedTime
Else 0
End)
From DailyOperations
Where (DailyOperations.WellID = 'ZCQ-5') AND (DailyOperations.JobID = 'Original') and (ReportNo = 9)
ReportNo 9 is the first report that has non-productive time, which is 4 hours. The next is ReportNo 14, with 5.5 hours of non-productive time. So when I run the query for ReportNo 14 I am hoping to see a total of 9.5 and nothing else. Below is the query that I am using for my running total, but it lists all of the non-productive time: instead of getting just 9.5 for ReportNo 14, I also get a running total for each instance of non-productive time within the report:
SELECT (ElapsedTime),(Reportno),NonProductive,
SUM(ElapsedTime) OVER (PARTITION BY NonProductive ORDER BY REPORTNO ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
AS RUNNINGTOTAL
FROM dbo.DailyOperations
WHERE (NonProductive IN(1)) and (WellID = 'ZCQ-5') AND (JobID = 'Original')
Group by ReportNo,ElapsedTime,NonProductive
Order by ReportNo
This gives me:
ReportNo RUNNINGTOTAL
9 4
14 6
14 9.5
What I want is:
ReportNo RUNNINGTOTAL
9 4
14 9.5

I think you want:
SELECT ReportNo, NonProductive,
       SUM(SUM(ElapsedTime)) OVER (PARTITION BY NonProductive ORDER BY ReportNo) AS RUNNINGTOTAL
FROM dbo.DailyOperations o
WHERE NonProductive IN (1) AND WellID = 'ZCQ-5' AND JobID = 'Original'
GROUP BY ReportNo, NonProductive
ORDER BY ReportNo;
Notes:
The GROUP BY defines each row that you want in the result set, so it only needs ReportNo (plus NonProductive, which the WHERE clause pins to a single value but which must still appear because it is in the SELECT list).
When combining window functions and aggregation functions, you sometimes end up with strange looking constructs such as SUM(SUM()).
The windowing clause is unnecessary. What you have is basically the default when you use ORDER BY (strictly, the default is RANGE BETWEEN rather than ROWS BETWEEN, but ReportNo is unique here, so the two are equivalent).
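If the nested SUM(SUM()) looks odd, here is a minimal equivalent written with an explicit derived table instead — just a sketch, using the same table and filters as the query above:
SELECT ReportNo, NonProductive,
       SUM(DailyTotal) OVER (PARTITION BY NonProductive ORDER BY ReportNo) AS RUNNINGTOTAL
FROM (
    -- the inner SUM: an ordinary aggregate, one row per report
    SELECT ReportNo, NonProductive, SUM(ElapsedTime) AS DailyTotal
    FROM dbo.DailyOperations
    WHERE NonProductive IN (1) AND WellID = 'ZCQ-5' AND JobID = 'Original'
    GROUP BY ReportNo, NonProductive
) d
ORDER BY ReportNo;
In SUM(SUM(ElapsedTime)), the inner SUM is the GROUP BY aggregate and the outer SUM is the window function; the nested form simply fuses the two levels into one query.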

Related

SQL - Calculate percentage by group, for multiple groups

I have a table in GBQ in the following format:
UserId Orders Month
XDT 23 1
XDT 0 4
FKR 3 6
GHR 23 4
... ... ...
It shows the number of orders per user and month.
I want to calculate the percentage of users who have orders. I did it as follows:
SELECT
HasOrders,
ROUND(COUNT(*) * 100 / CAST( SUM(COUNT(*)) OVER () AS float64), 2) Parts
FROM (
SELECT
*,
CASE WHEN Orders = 0 THEN 0 ELSE 1 END AS HasOrders
FROM `Table` )
GROUP BY
HasOrders
ORDER BY
Parts
It gives me the following result:
HasOrders Parts
0 35
1 65
I need to calculate the percentage of users who have orders, by month, in a way that every month = 100%
Currently, to do this I execute the query once per month, which is not practical:
SELECT
HasOrders,
ROUND(COUNT(*) * 100 / CAST( SUM(COUNT(*)) OVER () AS float64), 2) Parts
FROM (
SELECT
*,
CASE WHEN Orders = 0 THEN 0 ELSE 1 END AS HasOrders
FROM `Table` )
WHERE Month = 1
GROUP BY
HasOrders
ORDER BY
Parts
Is there a way to execute the query once and get this result?
HasOrders Parts Month
0 25 1
1 75 1
0 45 2
1 55 2
... ... ...
SELECT
SIGN(Orders),
ROUND(COUNT(*) * 100.000 / SUM(COUNT(*)) OVER (PARTITION BY Month), 2) AS Parts,
Month
FROM T
GROUP BY Month, SIGN(Orders)
ORDER BY Month, SIGN(Orders)
Demo on Postgres:
https://dbfiddle.uk/?rdbms=postgres_10&fiddle=4cd2d1455673469c2dfc060eccea8020
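A quick note on the switch from the CASE expression to SIGN — an equivalence that holds because an order count is never negative (an assumption, but a safe one for counts):
CASE WHEN Orders = 0 THEN 0 ELSE 1 END  -- the asker's flag
SIGN(Orders)                            -- same result whenever Orders >= 0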
You've stated that it's important for the totals to sum to 100%, so you might consider rounding down in the no-orders case and rounding up in the has-orders case for those scenarios where the percentage falls precisely on an odd multiple of 0.5%. Rounding toward even, or rounding the smallest value down, might be better options:
WITH DATA AS (
SELECT SIGN(Orders) AS HasOrders, Month,
COUNT(*) * 100.000 / SUM(COUNT(*)) OVER (PARTITION BY Month) AS PartsPercent
FROM T
GROUP BY Month, SIGN(Orders)
ORDER BY Month, SIGN(Orders)
)
select HasOrders, Month, PartsPercent,
PartsPercent - TRUNCATE(PartsPercent) AS Fraction,
CASE WHEN HasOrders = 0
THEN FLOOR(PartsPercent) ELSE CEILING(PartsPercent)
END AS PartsRound0Down,
CASE WHEN PartsPercent - TRUNCATE(PartsPercent) = 0.5
AND MOD(TRUNCATE(PartsPercent), 2) = 0
THEN FLOOR(PartsPercent) ELSE ROUND(PartsPercent) -- halfway up
END AS PartsRoundTowardEven,
CASE WHEN PartsPercent - TRUNCATE(PartsPercent) = 0.5 AND PartsPercent < 50
THEN FLOOR(PartsPercent) ELSE ROUND(PartsPercent) -- halfway up
END AS PartsSmallestTowardZero
from DATA
It's usually not advisable to test floating-point values for equality, and I don't know how BigQuery's float64 will behave in the comparison against 0.5 (one half is, at least, exactly representable in binary). See these variants in a case where the breakout is 101 vs 99. I don't have immediate access to BigQuery, so be aware that Postgres's rounding behavior is different:
https://dbfiddle.uk/?rdbms=postgres_10&fiddle=c8237e272427a0d1114c3d8056a01a09
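One way to sidestep the float-equality concern is to compute the percentage in exact decimal arithmetic before doing the 0.5 test — a sketch assuming a dialect with a NUMERIC type and TRUNC(), as in the Postgres demo:
WITH DATA AS (
  SELECT SIGN(Orders) AS HasOrders, Month,
         CAST(COUNT(*) AS NUMERIC) * 100 / SUM(COUNT(*)) OVER (PARTITION BY Month) AS PartsPercent
  FROM T
  GROUP BY Month, SIGN(Orders)
)
SELECT HasOrders, Month, PartsPercent,
       -- exact comparison: NUMERIC carries no binary representation error
       (PartsPercent - TRUNC(PartsPercent)) = 0.5 AS IsExactHalf
FROM DATA;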
Consider the approach below:
select hasOrders, round(100 * parts, 2) as parts, month from (
select month,
countif(orders = 0) / count(*) `0`,
countif(orders > 0) / count(*) `1`,
from your_table
group by month
)
unpivot (parts for hasOrders in (`0`, `1`))
with output in the shape requested above: one row per month for each hasOrders value.

SQL - Calculate the difference in number of orders by month

I am working on the orders table provided by this site, which has its own editor where you can test your SQL statements.
The order table looks like this:
order_id  customer_id  order_date
1         7000         2016/04/18
2         5000         2016/04/18
3         8000         2016/04/19
4         4000         2016/04/20
5         NULL         2016/05/01
I want to get the difference in the number of orders for subsequent months.
To elaborate, the number of orders each month would be like this
SQL Statement
SELECT
MONTH(order_date) AS Month,
COUNT(MONTH(order_date)) AS Total_Orders
FROM
orders
GROUP BY
MONTH(order_date)
Result:
Month  Total_Orders
4      4
5      1
Now my goal is to get the difference in subsequent months which would be
Month  Total_Orders  Total_Orders_Diff
4      4             4 - NULL = NULL
5      1             1 - 4 = -3
My strategy was to self-join following this answer
This was my attempt
SELECT
MONTH(a.order_date),
COUNT(MONTH(a.order_date)),
COUNT(MONTH(b.order_date)) - COUNT(MONTH(a.order_date)) AS prev,
MONTH(b.order_date)
FROM
orders a
LEFT JOIN
orders b ON MONTH(a.order_date) = MONTH(b.order_date) - 1
GROUP BY
MONTH(a.order_date)
However, the result was just zeros (as shown below), which suggests that I am subtracting from the same value rather than from the previous month (or subtracting from a null value):
MONTH(a.order_date)  COUNT(MONTH(a.order_date))  prev  MONTH(b.order_date)
4                    4                           0     NULL
5                    1                           0     NULL
Do you have any suggestions as to what I am doing wrong?
You have to use the LAG window function in your SELECT statement.
LAG provides access to a row at a given physical offset that comes
before the current row.
So, this is what you need:
SELECT
MONTH(order_date) as Month,
COUNT(MONTH(order_date)) as Total_Orders,
COUNT(MONTH(order_date)) - LAG(COUNT(MONTH(order_date))) OVER (ORDER BY MONTH(order_date)) AS Total_Orders_Diff
FROM orders
GROUP BY MONTH(order_date);
Here is an example on SQL Fiddle: http://sqlfiddle.com/#!18/5ed75/1
Solution without using LAG window function:
WITH InitCTE AS
(
SELECT MONTH(order_date) AS Month,
COUNT(MONTH(order_date)) AS Total_Orders
FROM orders
GROUP BY MONTH(order_date)
)
SELECT InitCTE.Month, InitCTE.Total_Orders, R.Total_Orders_Diff
FROM InitCTE
OUTER APPLY (SELECT TOP 1 InitCTE.Total_Orders - CompareCTE.Total_Orders AS Total_Orders_Diff
             FROM InitCTE AS CompareCTE
             WHERE CompareCTE.Month < InitCTE.Month
             ORDER BY CompareCTE.Month DESC) R;
Something like the following should give you what you want - disclaimer, untested!
select *, Total_Orders - lag(Total_orders,1) over(order by Month) as Total_Orders_Diff
from (
select Month(order_date) as Month, Count(*) as Total_Orders
From orders
Group by Month(order_date)
)o

Improving the performance of a query

My background is Oracle, but we've moved to Hadoop on AWS and I'm accessing our logs using Hive SQL. I've been asked to return a report, by uptime band, of cases where the number of high-severity errors of any given type on a system exceeds 9 in any rolling period of 30 days (the real threshold is 9, but I use 2 in the example to keep the data volumes down). I've written code that does this, but I don't really understand performance tuning in Hive; a lot of the stuff I learned in Oracle doesn't seem applicable.
Can this be improved?
Data is roughly
CREATE TABLE LOG_TABLE
(SYSTEM_ID VARCHAR(1),
EVENT_TYPE VARCHAR(2),
EVENT_ID VARCHAR(3),
EVENT_DATE DATE,
UPTIME INT);
INSERT INTO LOG_TABLE
VALUES
('1','A1','138','2018-10-29',34),
('1','A2','146','2018-11-13',49),
('1','A3','140','2018-11-02',38),
('1','B1','130','2018-10-13',18),
('1','B1','150','2018-11-19',55),
('1','B2','137','2018-10-27',32),
('2','A1','128','2018-10-11',59),
('2','A1','131','2018-10-16',64),
('2','A1','136','2018-10-25',73),
('2','A2','139','2018-10-31',79),
('2','A2','145','2018-11-11',90),
('2','A2','147','2018-11-14',93),
('2','A3','135','2018-10-24',72),
('2','B1','124','2018-10-03',51),
('2','B1','133','2018-10-19',67),
('2','B2','134','2018-10-22',70),
('2','B2','142','2018-11-06',85),
('2','B2','148','2018-11-15',94),
('2','B2','149','2018-11-17',96),
('3','A2','127','2018-10-10',122),
('3','A3','123','2018-10-01',113),
('3','A3','125','2018-10-06',118),
('3','A3','126','2018-10-07',119),
('3','A3','141','2018-11-05',148),
('3','A3','144','2018-11-10',153),
('3','B1','132','2018-10-18',130),
('3','B1','143','2018-11-08',151),
('3','B2','129','2018-10-12',124);
and here is code that works. I do a self-join on the log table to return all pairs of records with the gap between them, keeping those with a gap of 30 days or less. I then select those where there are more than 2 events into a second CTE, and from these I count distinct event types and event IDs by system and uptime range:
WITH EVENTGAP AS
(SELECT T1.EVENT_TYPE,
T1.SYSTEM_ID,
T1.EVENT_ID,
T2.EVENT_ID AS EVENT_ID2,
T1.EVENT_DATE,
T2.EVENT_DATE AS EVENT_DATE2,
T1.UPTIME,
DATEDIFF(T2.EVENT_DATE,T1.EVENT_DATE) AS EVENT_GAP
FROM LOG_TABLE T1
INNER JOIN LOG_TABLE T2
ON (T1.EVENT_TYPE=T2.EVENT_TYPE
AND T1.SYSTEM_ID=T2.SYSTEM_ID)
WHERE DATEDIFF(T2.EVENT_DATE,T1.EVENT_DATE) BETWEEN 0 AND 30
AND T1.UPTIME BETWEEN 0 AND 299
AND T2.UPTIME BETWEEN 0 AND 330),
EVENTCOUNT
AS (SELECT EVENT_TYPE,
SYSTEM_ID,
EVENT_ID,
EVENT_DATE,
COUNT(1)
FROM EVENTGAP
GROUP BY EVENT_TYPE,
SYSTEM_ID,
EVENT_ID,
EVENT_DATE
HAVING COUNT(1)>2)
SELECT EVENTGAP.SYSTEM_ID,
CASE WHEN FLOOR(UPTIME/50) = 0 THEN '0-49'
WHEN FLOOR(UPTIME/50) = 1 THEN '50-99'
WHEN FLOOR(UPTIME/50) = 2 THEN '100-149'
WHEN FLOOR(UPTIME/50) = 3 THEN '150-199'
WHEN FLOOR(UPTIME/50) = 4 THEN '200-249'
WHEN FLOOR(UPTIME/50) = 5 THEN '250-299' END AS UPTIME_BAND,
COUNT(DISTINCT EVENTGAP.EVENT_ID2) AS EVENT_COUNT,
COUNT(DISTINCT EVENTGAP.EVENT_TYPE) AS TYPE_COUNT
FROM EVENTGAP
WHERE EVENTGAP.EVENT_ID IN (SELECT DISTINCT EVENTCOUNT.EVENT_ID FROM EVENTCOUNT)
GROUP BY EVENTGAP.SYSTEM_ID,
CASE WHEN FLOOR(UPTIME/50) = 0 THEN '0-49'
WHEN FLOOR(UPTIME/50) = 1 THEN '50-99'
WHEN FLOOR(UPTIME/50) = 2 THEN '100-149'
WHEN FLOOR(UPTIME/50) = 3 THEN '150-199'
WHEN FLOOR(UPTIME/50) = 4 THEN '200-249'
WHEN FLOOR(UPTIME/50) = 5 THEN '250-299' END
This gives the following result, which should be unique counts of event IDs and event types that have 3 or more events falling in any rolling 30-day period. Some events may fall in more than one period but will only be counted once.
EVENTGAP.SYSTEM_ID UPTIME_BAND EVENT_COUNT TYPE_COUNT
2 50-99 10 3
3 100-149 4 1
In both Hive and Oracle, you would want to do this using window functions with a window frame clause. The exact logic is different in the two databases.
In Hive you can use range between if you convert event_date to a number. A typical method is to subtract a fixed value from it. Another method is to use unix timestamps:
select lt.*
from (select lt.*,
count(*) over (partition by event_type
order by unix_timestamp(event_date)
range between 60*60*24*30 preceding and current row
) as rolling_count
from log_table lt
) lt
where rolling_count >= 2 -- or 9
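And a minimal sketch of the "subtract a fixed value" method mentioned above, assuming Hive's datediff() and an arbitrary anchor date; turning the date into a day number lets the frame be expressed in days rather than seconds (the partition here adds system_id to match the question's join keys):
select lt.*
from (select lt.*,
             -- datediff(end, start) returns whole days, so a 30-day frame is just "30 preceding"
             count(*) over (partition by system_id, event_type
                            order by datediff(event_date, '1970-01-01')
                            range between 30 preceding and current row
                           ) as rolling_count
      from log_table lt
     ) lt
where rolling_count > 2;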

Creating average for specific timeframe

I'm setting up a time series with each row = 1 hr.
The input data sometimes has multiple values per hour; this can vary.
Right now the specific code looks like this:
select
patientunitstayid
, generate_series(ceil(min(nursingchartoffset)/60.0),
ceil(max(nursingchartoffset)/60.0)) as hr
, avg(case when nibp_systolic >= 1 and nibp_systolic <= 250 then
nibp_systolic else null end) as nibp_systolic_avg
from nc
group by patientunitstayid
order by patientunitstayid asc;
and generates data where the average is taken over the entire time series for each patient instead of for each hour. How can I fix this?
I'm expecting something like this:
select nc.patientunitstayid, gs.hr,
avg(case when nc.nibp_systolic >= 1 and nc.nibp_systolic <= 250
then nibp_systolic
end) as nibp_systolic_avg
from (select nc.*,
min(nursingchartoffset) over (partition by patientunitstayid) as min_nursingchartoffset,
max(nursingchartoffset) over (partition by patientunitstayid) as max_nursingchartoffset
from nc
) nc cross join lateral
generate_series(ceil(min_nursingchartoffset/60.0),
ceil(max_nursingchartoffset/60.0)
) as gs(hr)
group by nc.patientunitstayid, hr
order by nc.patientunitstayid asc, hr asc;
That is, you need to be aggregating by hr. I put this into the from clause, to highlight that this generates rows. If you are using an older version of Postgres, then you might not have lateral joins. If so, just use a subquery in the from clause.
EDIT:
You can also try:
from (select nc.*,
generate_series(ceil(min(nursingchartoffset) over (partition by patientunitstayid) / 60.0),
ceil(max(nursingchartoffset) over (partition by patientunitstayid)/ 60.0)
) hr
from nc
) nc
And adjust the references to hr in the outer query.
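For completeness, here is a sketch of how that EDIT's subquery plugs into the outer query — same assumed table and column names as above, untested:
select nc.patientunitstayid, nc.hr,
       avg(case when nc.nibp_systolic >= 1 and nc.nibp_systolic <= 250
                then nc.nibp_systolic
           end) as nibp_systolic_avg
from (select nc.*,
             -- one hr value per generated hour; the set-returning function expands the rows
             generate_series(ceil(min(nursingchartoffset) over (partition by patientunitstayid) / 60.0),
                             ceil(max(nursingchartoffset) over (partition by patientunitstayid) / 60.0)
                            ) as hr
      from nc
     ) nc
group by nc.patientunitstayid, nc.hr
order by nc.patientunitstayid asc, nc.hr asc;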

Calculate moving weather stats in PostgreSQL

I'm trying to calculate the days since last rain and the amount of rain in that event for each day in my PostgreSQL table of weather data. I've been trying to achieve this with window functions but the limitation of ranges having to be unbounded has left me a bit stuck on how to proceed.
Here's the query I have so far:
SELECT
station_num,
ob_date,
rain,
max(rain) OVER (PARTITION BY station_num ORDER BY ob_date ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as prev_rain_mm,
'' as days_since_rain --haven't attempted this calculation yet
FROM
obs_daily_ground_moisture
This gives a running maximum of rain over the whole history so far, but what I'm trying to achieve is the amount of rain in the most recent rain event and the number of days since it fell.
I feel like all the pieces are there (window function ranges and filters, nested queries), but I'm not sure how to pull it all together. Also, the above data is just a subset of the actual dataset; the full dataset is just over half a million rows.
The key here is to group the observations starting from each occurrence of a rain > 0 value up to, but not including, the next occurrence of a rain > 0 value. Thereafter you can use window functions within each group to calculate the needed columns.
select
    x.station_num,
    x.ob_date,
    -- within a group the only non-zero rain value is the event that opened it
    max(rain) over (partition by station_num, col) prev_rain,
    -- the rain day itself is day 0; each later row counts days since that event
    case when rain > 0 then 0
         else row_number() over (partition by station_num, col order by ob_date) - 1
    end days_since_rain
from (select t.*,
             -- running count of rain events: increments on every rain > 0 row,
             -- so an event and its following dry days share one group id (col)
             sum(case when rain > 0 then 1 else 0 end)
                 over (partition by station_num order by ob_date) col
      from t) x
Sample Demo
Try this (SQL Server):
DECLARE #Rain AS FLOAT
UPDATE A
SET
#Rain = CASE WHEN A.Rain = 0 THEN #Rain ELSE A.Rain END,
A.Rain = CASE WHEN #Rain IS NULL OR A.Rain <> 0 THEN A.Rain ELSE #Rain END
FROM obs_daily_ground_moisture A
SELECT ob_date, Rain,
max(rain) OVER (PARTITION BY station_num ORDER BY ob_date ASC RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as prev_rain_mm,
ROW_NUMBER() OVER(PARTITION BY Rain ORDER BY ob_date) - 1 as days_since_rain
FROM obs_daily_ground_moisture ORDER BY ob_date