How to calculate daily average from aggregate results with SQL?

I'm working on outputting some data and I want to pull the daily average of some numbers.
What I want to do is count the rows received (think of the row ID) and then divide the total against that day value to get the daily average: (30/1), (64/2), etc. I've tried everything, but I keep running into a wall with this.
As it stands, I'm guessing a subquery of some sort is needed to make this work. I just don't know how to get the day (row ID 1, 2, 3, 4, etc.) to use for the division.
SELECT calendar_date, SUM(NY_dayscore * cAttendance)
FROM vw_Appointments
WHERE status = 'Confirmed'
Group by calendar_date
I also attempted a count with DISTINCT, to no avail:
SUM(NY_dayscore * cAttendance) / COUNT(DISTINCT calendar_date)
My original code is long and I can't be bothered to post it all, so I'm just posting a small sample to get guidance on the issue.

In SQL Server 2012+, you would use the cumulative average:
select calendar_date, sum(NY_dayscore * cAttendance),
avg(sum(NY_dayscore * cAttendance)) over (order by calendar_date) as running_average
from vw_appointments a
where status = 'Confirmed'
group by calendar_date
order by calendar_date;
In SQL Server 2008, this is more difficult:
with a as (
select calendar_date, sum(NY_dayscore * cAttendance) as showed
from vw_appointments a
where status = 'Confirmed'
group by calendar_date
)
select a.*, a2.running_average
from a outer apply
(select avg(showed) as running_average
from a a2
where a2.calendar_date <= a.calendar_date
) a2
order by calendar_date;

Is it ROW_NUMBER() that you are missing?
SELECT
calendar_date,
SUM(NY_dayscore * cAttendance) / (ROW_NUMBER() OVER (ORDER BY calendar_date ASC)) AS average
FROM vw_Appointments
WHERE status = 'Confirmed'
GROUP BY calendar_date
ORDER BY calendar_date
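One thing to watch: if those columns are integers, SQL Server will do integer division here and truncate the average. A hedged variant of the same query (assuming the same view and columns) that forces decimal arithmetic:
SELECT
calendar_date,
-- multiplying by 1.0 avoids integer truncation if the underlying columns are ints
1.0 * SUM(NY_dayscore * cAttendance) / (ROW_NUMBER() OVER (ORDER BY calendar_date ASC)) AS average
FROM vw_Appointments
WHERE status = 'Confirmed'
GROUP BY calendar_date
ORDER BY calendar_date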

I think you need sum(showed) over (..)/row_number() over (..)
WITH Table1(date, showed) AS
(
SELECT '2019-01-02', 30 UNION ALL
SELECT '2019-01-03', 34 UNION ALL
SELECT '2019-01-03', 41 UNION ALL
SELECT '2019-01-04', 48
)
SELECT date,
sum(showed) over (order by date) /
row_number() over (order by date)
as daily_average
FROM Table1
GROUP BY showed, date;
date daily_average
2019-01-02 30
2019-01-03 52
2019-01-03 35
2019-01-04 38
Demo

Related

How to Count Entries on Same Day and Sum Amount based on the Count?

I am attempting to produce Table2 below, which counts the rows that fall on the same day and sums the "amount" column for those rows.
I found a solution online that can count entries from the same day, which works:
SELECT
DATE_TRUNC('day', datetime) AS date,
COUNT(datetime) AS date1
FROM Table1
GROUP BY DATE_TRUNC('day', datetime);
It is partially what I am looking for, but I am having difficulty trying to display all the column names.
In my attempt, I have all the columns I want but the Accumulated Count is not accurate since it counts the rows with unique IDs (because I put "id" in GROUP BY):
SELECT *, count(id) OVER(ORDER BY DateTime) as accumulated_count,
SUM(Amount) OVER(ORDER BY DateTime) AS Accumulated_Amount
FROM Table1
GROUP BY date(datetime), id
I've been working on this for days and seemingly have come across every possible outcome that is not what I am looking for. Does anyone have an idea as to what I'm missing here?
Cumulative sum and count should be calculated for each day
with Table1 (id,datetime,client,product,amount) as(values
(1 ,to_timestamp('2020-07-08 07:30:10','YYYY-MM-DD HH24:MI:SS'),'Tom','Bill Payment',24),
(2 ,to_timestamp('2020-07-08 07:50:30','YYYY-MM-DD HH24:MI:SS'),'Tom','Bill Payment',27),
(3 ,to_timestamp('2020-07-09 08:20:10','YYYY-MM-DD HH24:MI:SS'),'Tom','Bill Payment',37)
)
SELECT
Table1.*,
count(*) over (partition by DATE_TRUNC('day', datetime)
order by datetime asc ) accumulated_count,
sum(amount) over (partition by DATE_TRUNC('day', datetime) order by datetime asc) accumulated_sum
FROM Table1;
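For the three sample rows in that CTE, this should produce the following (showing only the relevant columns; the counters reset when the day changes):
id datetime accumulated_count accumulated_sum
1 2020-07-08 07:30:10 1 24
2 2020-07-08 07:50:30 2 51
3 2020-07-09 08:20:10 1 37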
Not too familiar with PostgreSQL, but this does what you ask for.
with data (id,date_time,client,product,amount) as(
select 1 ,to_timestamp('Jul 08 2020, 07:30:10','Mon DD YYYY, HH24:MI:SS'),'Tom','Bill',24 Union all
select 2 ,to_timestamp('Jul 08 2020, 07:50:30','Mon DD YYYY, HH24:MI:SS'),'Tom','Bill',27 Union all
select 3 ,to_timestamp('Jul 09 2020, 08:20:10','Mon DD YYYY, HH24:MI:SS'),'Tom','Bill',37
)
select d.id,d.date_time,d.client,d.product,d.amount,
(select count(*) from data d1
where d1.date_time <= d.date_time and date(d1.date_time) = date(d.date_time) ) acc_count,
(select sum(amount) from data d1
where d1.date_time <= d.date_time and date(d1.date_time) = date(d.date_time) ) acc_amount
from data d

Past 7 days running amounts average as progress per each date

So the query is simple, but I am facing issues implementing the SQL logic. Here's the situation: suppose I have records like
Phoneno Company Date Amount
83838 xyz 20210901 100
87337 abc 20210902 500
47473 cde 20210903 600
The expected output is the past 7 days' progress as a running average of the amount for each date (the current date and the 6 days before it):
Date amount avg
20210901 100 100
20210902 500 300
20210903 600 400
I tried
Select date, amount, select
avg(lg) from (
Select case when lag(amount)
Over (order by NULL) IS NULL
THEN AMOUNT
ELSE
lag(amount)
Over (order by NULL) END AS LG)
From table
WHERE DATE>=t.date-7) as avg
From table t;
But I am getting wrong avg values. Could anyone please help?
Note: I've tried without LAG too, and it also produces the wrong averages.
You could use a self join to group the dates
select distinct
a.dt,
b.dt as preceding_dt, --just for QA purpose
a.amt,
b.amt as preceding_amt,--just for QA purpose
avg(b.amt) over (partition by a.dt) as avg_amt
from t a
join t b on a.dt-b.dt between 0 and 6
group by a.dt, b.dt, a.amt, b.amt; --to dedupe the data after the join
If you want to make your correlated subquery approach work, you don't really need the lag.
select dt,
amt,
(select avg(b.amt) from t b where a.dt-b.dt between 0 and 6) as avg_lg
from t a;
If you don't have multiple rows per date, this gets even simpler
select dt,
amt,
avg(amt) over (order by dt rows between 6 preceding and current row) as avg_lg
from t;
Also, the condition DATE >= t.date - 7 that you used is only bounded on one side, so it will qualify a lot of dates that shouldn't be included.
DEMO
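For illustration, a bounded version of that filter on the question's own correlated-subquery approach might look like this (your_table is a placeholder name, and the date arithmetic follows the Oracle style used above):
select t.date, t.amount,
(select avg(t2.amount) from your_table t2
where t2.date >= t.date - 6 -- lower bound: at most 6 days before the current row
and t2.date <= t.date -- upper bound: not after the current row's date
) as avg_amount
from your_table t;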
You can use an analytic function with a windowing clause to get your results:
SELECT DISTINCT BillingDate,
AVG(amount) OVER (ORDER BY BillingDate
RANGE BETWEEN TO_DSINTERVAL('7 00:00:00') PRECEDING
AND TO_DSINTERVAL('0 00:00:00') FOLLOWING) AS RUNNING_AVG
FROM accounts
ORDER BY BillingDate;
Here is a DBFiddle showing the query in action (LINK)

Running Total in SQL including last year end total

Need a query to select the "Running Total" as shown in the image:
starting from the 2017 year-end total, every month's new figure should add onto the previous total.
https://i.stack.imgur.com/DL7p0.png "Example"
The following script will work for MSSQL, and you can use the same logic for other databases as well:
WITH your_table(year,month,partersgrowth)
AS
(
SELECT '2019','jan', 100 UNION ALL
SELECT '2019','feb', 300 UNION ALL
SELECT '2019','mar', 400 UNION ALL
SELECT '2019','apr', 500 UNION ALL
SELECT '2018','Dec', 200
)
SELECT A.year,A.month,A.partersgrowth,
(
SELECT SUM(B.partersgrowth)
FROM your_table B
WHERE CAST(B.Year +'-'+B.month+'-01' AS DATE)
<= CAST(A.Year +'-'+A.month+'-01' AS DATE)
) Running_Total
FROM your_table A
ORDER BY CAST(A.Year +'-'+A.month+'-01' AS DATE)
Using #mkRabbani's solution, you can simplify it like this:
;WITH your_table(year,month,partersgrowth)
AS
(
SELECT '2019','jan', 100 UNION ALL
SELECT '2019','feb', 300 UNION ALL
SELECT '2019','mar', 400 UNION ALL
SELECT '2019','apr', 500 UNION ALL
SELECT '2018','Dec', 200
)
select *, sum(partersgrowth) over ( order by [year],[month]) as running_total
from your_table
EDIT: As pointed out in the comment below, you want to order by a proper date in the SUM part (I would order by the year and then the month number rather than the month name).
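For illustration, one way to do that with the same sample data (the CASE mapping from month name to month number below is just an assumption about the data):
;WITH your_table(year,month,partersgrowth)
AS
(
SELECT '2019','jan', 100 UNION ALL
SELECT '2019','feb', 300 UNION ALL
SELECT '2019','mar', 400 UNION ALL
SELECT '2019','apr', 500 UNION ALL
SELECT '2018','Dec', 200
)
select *,
sum(partersgrowth) over (
order by [year],
case [month] -- illustrative month-name-to-number mapping
when 'jan' then 1 when 'feb' then 2 when 'mar' then 3
when 'apr' then 4 when 'may' then 5 when 'jun' then 6
when 'jul' then 7 when 'aug' then 8 when 'sep' then 9
when 'oct' then 10 when 'nov' then 11 when 'dec' then 12
end
) as running_total
from your_table;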
If you use MSSQL you can run the following code
with cte as (
select t.*, lag(partersGrowth) over(partition by [year] order by [month]) prevTotal
from your_table t )
select [year], [month], partersGrowth, ISNULL(prevTotal, 0) + partersGrowth as "Need Running Total"
from cte

sql return 1st day of each month in table

I have a sql table like so with two columns...
3/1/17 100
3/2/17 200
3/3/17 300
4/3/17 600
4/4/17 700
4/5/17 800
I am trying to run a query that returns the 1st day of each month in that above table, and grab the corresponding value.
results should be
3/1/17 100
4/3/17 600
then once I have these results... do something with each one.
any ideas how I can get started?
In standard SQL, you would use row_number():
select t.*
from (select t.*,
row_number() over (partition by extract(year from dte), extract(month from dte)
order by dte asc) as seqnum
from t
) t
where seqnum = 1;
Most databases support this functionality, but the exact functions (particularly for dates) may differ depending on the database.
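For example, a SQL Server flavoured version of the same idea would swap in YEAR() and MONTH() (keeping the placeholder names t and dte from the query above):
select t.*
from (select t.*,
row_number() over (partition by year(dte), month(dte)
order by dte asc) as seqnum
from t
) t
where seqnum = 1;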
An alternative (SQL Server flavour):
SELECT t.*
FROM YourTable t
JOIN (
select MIN(DateColumn) as MinimumDate
from YourTable
group by FORMAT(DateColumn,'yyyyMM')
) q on (t.DateColumn = q.MinimumDate)
ORDER BY t.DateColumn;
For the GROUP BY this will also be fine:
group by YEAR(DateColumn), MONTH(DateColumn)
or
group by DATEPART(YEAR,DateColumn), DATEPART(MONTH,DateColumn)

How can I count users in a month that were not present in the month before?

I am trying to count unique users on a monthly basis that were not present in the previous month. So if a user has a record for January and then another one for February, then I would only count January for that user.
user_id time
a1 1/2/17
a1 2/10/17
a2 2/18/17
a4 2/5/17
a5 3/25/17
My results should look like this
Month User Count
January 1
February 2
March 1
I'm not really familiar with BigQuery, but here's how I would solve the problem using TSQL. I imagine that you'd be able to use similar logic in BigQuery.
1). Order the data by user_id first, and then time. In TSQL, you can accomplish this with the following and store it in a common table expression, which you will query in the step after this.
;WITH cte AS
(
select ROW_NUMBER() OVER (PARTITION BY [user_id] ORDER BY [time]) AS rn,*
from dbo.employees
)
2). Next query for only the rows with rn = 1 (the first occurrence for a particular user) and group by the month.
select DATENAME(month, [time]) AS [Month], count(*) AS user_count
from cte
where rn = 1
group by DATENAME(month, [time])
This is assuming that 2017 is the only year you're dealing with. If you're dealing with more than one year, you probably want step #2 to look something like this:
select year([time]) as [year], DATENAME(month, [time]) AS [month],
count(*) AS user_count
from cte
where rn = 1
group by year([time]), DATENAME(month, [time])
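One caveat: DATENAME(month, ...) is just a string, so if you want the output in chronological order you could also group on the month number and order by it (a hedged tweak on the same query, reusing the cte from step 1):
select year([time]) as [year], DATENAME(month, [time]) AS [month],
count(*) AS user_count
from cte
where rn = 1
group by year([time]), month([time]), DATENAME(month, [time])
order by year([time]), month([time])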
First aggregate by the user id and the month. Then use lag() to see if the user was present in the previous month:
with du as (
select date_trunc(time, month) as yyyymm, user_id
from t
group by date_trunc(time, month), user_id
)
select yyyymm, count(*)
from (select du.*,
lag(yyyymm) over (partition by user_id order by yyyymm) as prev_yyyymm
from du
) du
where prev_yyyymm is null or
prev_yyyymm < date_sub(yyyymm, interval 1 month)
group by yyyymm;
Note: This uses the date functions, but similar functions exist for timestamp.
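For example, if time were a TIMESTAMP column, one hedged option is to convert it to a DATE first and keep the month arithmetic on dates:
select date_trunc(date(time), month) as yyyymm, user_id
from t
group by date_trunc(date(time), month), user_id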
The way I understood the question: a user should be excluded from the count for a given month only if the same user was present in the previous month. If the user was present in some earlier month, but not the immediately previous one, the user should still be counted.
If this is correct, try the below for BigQuery Standard SQL.
#standardSQL
SELECT Year, Month, COUNT(DISTINCT user_id) AS User_Count
FROM (
SELECT *,
DATE_DIFF(time, LAG(time) OVER(PARTITION BY user_id ORDER BY time), MONTH) AS flag
FROM (
SELECT
user_id,
DATE_TRUNC(PARSE_DATE('%x', time), MONTH) AS time,
EXTRACT(YEAR FROM PARSE_DATE('%x', time)) AS Year,
FORMAT_DATE('%B', PARSE_DATE('%x', time)) AS Month
FROM yourTable
GROUP BY 1, 2, 3, 4
)
)
WHERE IFNULL(flag, 0) <> 1
GROUP BY Year, Month, time
ORDER BY time
You can test / play with the above using the example below, with dummy data from your question:
#standardSQL
WITH yourTable AS (
SELECT 'a1' AS user_id, '1/2/17' AS time UNION ALL
SELECT 'a1', '2/10/17' UNION ALL
SELECT 'a2', '2/18/17' UNION ALL
SELECT 'a4', '2/5/17' UNION ALL
SELECT 'a5', '3/25/17'
)
SELECT Year, Month, COUNT(DISTINCT user_id) AS User_Count
FROM (
SELECT *,
DATE_DIFF(time, LAG(time) OVER(PARTITION BY user_id ORDER BY time), MONTH) AS flag
FROM (
SELECT
user_id,
DATE_TRUNC(PARSE_DATE('%x', time), MONTH) AS time,
EXTRACT(YEAR FROM PARSE_DATE('%x', time)) AS Year,
FORMAT_DATE('%B', PARSE_DATE('%x', time)) AS Month
FROM yourTable
GROUP BY 1, 2, 3, 4
)
)
WHERE IFNULL(flag, 0) <> 1
GROUP BY Year, Month, time
ORDER BY time
The output is
Year Month User_Count
2017 January 1
2017 February 2
2017 March 1
Try this query:
SELECT
t1.d,
count(DISTINCT t1.user_id)
FROM
(
SELECT
EXTRACT(MONTH FROM time) AS d,
--EXTRACT(MONTH FROM time)-1 AS d2,
user_id
FROM nbitra.tmp
) t1
LEFT JOIN
(
SELECT
EXTRACT(MONTH FROM time) AS d,
user_id
FROM nbitra.tmp
) t2
ON t1.d = t2.d+1
WHERE
(
t1.user_id <> t2.user_id --User in the previous month is not the same user
OR t2.user_id IS NULL --To handle january, since there is no previous month to compare to
)
GROUP BY t1.d;