Get the rows greater than the AVG (subquery) in SQLite3 - sql

Consider the following table:
covid_data(
    CASES INT,
    DEATHS INT,
    COUNTRIES VARCHAR(64)
);
I am trying to get the names of the countries for which the mortality rate is greater than the average mortality rate. The formula I am using to get the number of deaths per 1000 cases is:
(NUMBER OF DEATHS / NUMBER OF CASES) * 1000
To get the AVG I use this query:
SELECT AVG(rate)
FROM (
SELECT CAST(SUM(deaths) AS FLOAT) / SUM(cases) * 1000 AS rate
FROM covid_data
) covid_data;
To list the countries with a rate greater than this average, here is one of the many attempts I have tried so far:
SELECT countries, CAST(SUM(deaths) AS FLOAT) / SUM(cases) * 1000 AS RATEM
FROM covid_data
GROUP BY countries
HAVING RATEM > (SELECT AVG(RATE)
FROM (
SELECT CAST(SUM(DEATHS) AS FLOAT) / SUM(CASES) * 1000 AS RATE
FROM covid_data
) covid_data);
This is returning an error: no such column: RATEM
As you can see, I am struggling with these basic concepts. I would also appreciate any books/courses/resources to better understand these relations.

You can use window functions:
SELECT cd.countries
FROM (SELECT cd.*,
             SUM(deaths * 1.0) OVER () / SUM(cases) OVER () AS mortality_ratio
      FROM covid_data cd
     ) cd
WHERE (deaths * 1.0 / NULLIF(cases, 0)) > mortality_ratio;
Note that the average of the mortality ratio in each country is NOT the same as the overall mortality ratio. I think you understand this but I just want to emphasize that point. The average ratio would be:
AVG(deaths * 1.0 / NULLIF(cases, 0))
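For completeness, here is a minimal sketch (my own addition, not part of the answer above; it assumes SQLite 3.25+ for window function support) that compares each country's rate against the average of the per-country rates:
-- Sketch: per-country rate vs. the average of all per-country rates.
SELECT countries, rate
FROM (
    SELECT countries,
           CAST(SUM(deaths) AS FLOAT) / SUM(cases) * 1000 AS rate,
           AVG(CAST(SUM(deaths) AS FLOAT) / SUM(cases) * 1000) OVER () AS avg_rate
    FROM covid_data
    GROUP BY countries
) t
WHERE rate > avg_rate;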

You could use window functions:
select t.*
from (
    select
        t.*,
        1.0 * deaths / cases as rate,
        1.0 * sum(deaths) over() / sum(cases) over() as avg_rate
    from covid_data t
) t
where rate > avg_rate;

Related

SQL - Calculate percentage by group, for multiple groups

I have a table in GBQ in the following format:
UserId Orders Month
XDT 23 1
XDT 0 4
FKR 3 6
GHR 23 4
... ... ...
It shows the number of orders per user and month.
I want to calculate the percentage of users who have orders. I did it as follows:
SELECT
HasOrders,
ROUND(COUNT(*) * 100 / CAST( SUM(COUNT(*)) OVER () AS float64), 2) Parts
FROM (
SELECT
*,
CASE WHEN Orders = 0 THEN 0 ELSE 1 END AS HasOrders
FROM `Table` )
GROUP BY
HasOrders
ORDER BY
Parts
It gives me the following result:
HasOrders Parts
0 35
1 65
I need to calculate the percentage of users who have orders, by month, in a way that every month = 100%
Currently, to do this I execute the query once per month, which is not practical:
SELECT
HasOrders,
ROUND(COUNT(*) * 100 / CAST( SUM(COUNT(*)) OVER () AS float64), 2) Parts
FROM (
SELECT
*,
CASE WHEN Orders = 0 THEN 0 ELSE 1 END AS HasOrders
FROM `Table` )
WHERE Month = 1
GROUP BY
HasOrders
ORDER BY
Parts
Is there a way to execute the query once and get this result?
HasOrders Parts Month
0 25 1
1 75 1
0 45 2
1 55 2
... ... ...
SELECT
SIGN(Orders),
ROUND(COUNT(*) * 100.000 / SUM(COUNT(*)) OVER (PARTITION BY Month), 2) AS Parts,
Month
FROM T
GROUP BY Month, SIGN(Orders)
ORDER BY Month, SIGN(Orders)
Demo on Postgres:
https://dbfiddle.uk/?rdbms=postgres_10&fiddle=4cd2d1455673469c2dfc060eccea8020
You've stated that it's important for the total to be 100%, so you might consider rounding down in the case of no orders and rounding up in the case of "has orders" for those scenarios where the percentage falls precisely on an odd multiple of 0.5%. Or perhaps rounding toward even, or rounding the smaller value down, would be better options:
WITH data AS (
    SELECT SIGN(Orders) AS HasOrders, Month,
           COUNT(*) * 100.000 / SUM(COUNT(*)) OVER (PARTITION BY Month) AS PartsPercent
    FROM T
    GROUP BY Month, SIGN(Orders)
    ORDER BY Month, SIGN(Orders)
)
SELECT HasOrders, Month, PartsPercent,
       PartsPercent - TRUNC(PartsPercent) AS Fraction,
       CASE WHEN HasOrders = 0
            THEN FLOOR(PartsPercent) ELSE CEILING(PartsPercent)
       END AS PartsRound0Down,
       CASE WHEN PartsPercent - TRUNC(PartsPercent) = 0.5
             AND MOD(TRUNC(PartsPercent), 2) = 0
            THEN FLOOR(PartsPercent) ELSE ROUND(PartsPercent) -- halfway up
       END AS PartsRoundTowardEven,
       CASE WHEN PartsPercent - TRUNC(PartsPercent) = 0.5 AND PartsPercent < 50
            THEN FLOOR(PartsPercent) ELSE ROUND(PartsPercent) -- halfway up
       END AS PartsSmallestTowardZero
FROM data
It's usually not advisable to test floating-point values for equality, and I don't know how BigQuery's float64 will behave in the comparison against 0.5. One half is, nevertheless, exactly representable in binary. See these options in a case where the breakout is 101 vs. 99. I don't have immediate access to BigQuery, so be aware that Postgres's rounding behavior is different:
https://dbfiddle.uk/?rdbms=postgres_10&fiddle=c8237e272427a0d1114c3d8056a01a09
Consider the approach below:
select hasOrders, round(100 * parts, 2) as parts, month from (
select month,
countif(orders = 0) / count(*) `0`,
countif(orders > 0) / count(*) `1`
from your_table
group by month
)
unpivot (parts for hasOrders in (`0`, `1`))
with output like below

Calculating % of COUNT with GROUP BY in BigQuery

I'm running into some issues figuring out how to add an extra column that gives the percentage of the total for the aggregated counts. The query I have looks like this:
Select
count(*) AS num_rides,
member_casual
FROM `2020_bikeshare_data`
GROUP BY member_casual
ORDER BY num_rides DESC
And it returns this result:
num_rides    member_casual
2134988      member
1341217      casual
And what I'd like to do is add a third column that lists the percentage of the total that each membership type makes up:
num_rides    member_casual    perc_tot
2134988      member           61.4%
1341217      casual           38.6%
thoughts?
You can use window functions:
SELECT member_casual,
COUNT(*) AS num_rides,
COUNT(*) * 1.0 / SUM(COUNT(*)) OVER () AS perc_tot
FROM `2020_bikeshare_data`
GROUP BY member_casual
ORDER BY num_rides DESC;
No subquery is needed.
Consider the approach below:
select distinct member_casual,
count(*) over type as num_rides,
round(count(*) over type * 100.0 / count(*) over(), 2) as perc_tot
from `2020_bikeshare_data`
window type as (partition by member_casual)
# order by num_rides desc
If applied to the sample data in your question, the output is:
The simplest way is to use a subquery as part of the column expression to calculate your percentage:
select
count(1) as num_rides,
member_casual,
sum(100) / (select sum(1.0) from `2020_bikeshare_data`) as perc_tot -- return as percentage
from
`2020_bikeshare_data`
group by
member_casual
Using the subquery, get the total number of rows and calculate the percentage accordingly.
Select
count(*) AS num_rides,
member_casual,
Concat(count(*) * 100 / MAX(totalRecord), ' %') as perc_tot
FROM (SELECT *, COUNT(*) OVER() as totalRecord FROM `2020_bikeshare_data`)
GROUP BY member_casual
or
Select
count(*) AS num_rides,
member_casual,
Concat(count(*) * 100 / (SELECT COUNT(*) FROM `2020_bikeshare_data`) ,' %') as perc_tot
FROM `2020_bikeshare_data`
GROUP BY member_casual
In addition to the other answers, you can also break this down into simple SQL (without window functions) by organizing with CTEs.
with
data as (select * from `2020_bikeshare_data`),
total as (select count(*) as ride_count from data),
by_type as (select member_casual, count(*) as ride_count from data group by 1)
select
member_casual,
by_type.ride_count as num_rides,
by_type.ride_count / total.ride_count as perc_tot
from by_type
cross join total
In my opinion, this makes the perc_tot calculation much easier to see.

Using SQLite, how can I calculate the maximum year on year growth rate for each year?

I am learning about SQL and I am doing a practice exercise called World Populations SQL Practice on Codecademy. There is one table with three columns: country, population, and year. I am interested in calculating the country with the maximum year-on-year growth rate each year. (This wasn't suggested by Codecademy, I just think it's an interesting idea).
I can calculate all of the year-on-year growth rates with this query:
SELECT country,
100.0 * ((SELECT population FROM population_years AS p2
WHERE p2.year = p1.year + 1
AND p2.country = p1.country)
- population) / population AS year_on_year_growth,
year
FROM population_years AS p1
WHERE year_on_year_growth IS NOT NULL
ORDER BY year_on_year_growth;
and I can calculate the maximum year-on-year growth rate for a particular year, such as 2005, with a query such as this:
SELECT country,
100.0 * ((SELECT population FROM population_years AS p2
WHERE p2.year = p1.year + 1
AND p2.country = p1.country)
- population) / population AS year_on_year_growth,
year
FROM population_years AS p1
WHERE year = 2005
AND year_on_year_growth IS NOT NULL
ORDER BY year_on_year_growth DESC
LIMIT 1;
Using Python, I can solve the problem with the first query saved as yoy_query if I do this:
yoy_result = c.execute(yoy_query).fetchall()
sorted(
    [record for record in yoy_result
     if record[1] == max([row[1] for row in yoy_result if row[2] == record[2]])],
    key=lambda x: x[2]
)
and I get the desired result:
[('Montserrat', 7.34177215189872, 2000), ('Montserrat', 13.4433962264151, 2001), ('Afghanistan', 5.803891762260126, 2002), ('Montserrat', 10.467706013363028, 2003), ('Liberia', 4.7976709085316545, 2004), ('Jordan', 7.088496587486171, 2005), ('Jordan', 6.764378108744186, 2006), ('Montserrat', 12.638580931263864, 2007), ('Liberia', 4.157111008408977, 2008), ('Niger', 3.737166190281749, 2009)]
But I can't think of a way to do this using SQL. Any ideas? I think the reason it seems much easier in Python is because I'm able to save the intermediate result, then run a second calculation on that.
You can do it with the window functions LAG() and RANK():
select country, year_on_year_growth, year
from (
    select *, rank() over (partition by year order by year_on_year_growth desc) as rnk
    from (
        select *,
            100.0 * (1.0 * population / lag(population) over (partition by country order by year) - 1) as year_on_year_growth
        from population_years
    )
    where year_on_year_growth is not null
)
where rnk = 1
order by year
The expression:
lag(population) over (partition by country order by year)
returns the country's population the previous year (assuming that there are no gaps between the years).
So I calculated the growth rate as:
((current year's population) / (previous year's population)) - 1
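For example (an illustrative calculation, not from the original answer): if a country's population was 1,000,000 in 2004 and 1,050,000 in 2005, the 2005 row gets 100.0 * (1,050,000 / 1,000,000 - 1) = 5.0, i.e. 5% year-on-year growth.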
I guess the simplest thing to do would actually be to just use a view as follows:
CREATE VIEW yoy_growth
AS
SELECT country,
100.0 * ((SELECT population FROM population_years AS p2
WHERE p2.year = p1.year + 1
AND p2.country = p1.country)
- population) / population AS year_on_year_growth,
year
FROM population_years AS p1
WHERE year_on_year_growth IS NOT NULL
ORDER BY year_on_year_growth;
SELECT * FROM yoy_growth AS y1
WHERE year_on_year_growth = (
SELECT MAX(year_on_year_growth)
FROM yoy_growth AS y2
WHERE y1.year = y2.year
)
ORDER BY year;
That way I get the result I want, although the query does seem a little slow.

Cohort/ Retention query in BigQuery using Google Analytics exported data

I need help formulating a cohort/retention query.
I am trying to build a query to look at visitors who performed Action X on their first visit (in the time frame) and then see how many days later they returned to perform Action X again.
The output I (eventually) need looks like this...
The table I am dealing with is an export of Google Analytics to BigQuery
Could anyone help me with this, or share a similar query that I can adapt?
Thanks
Just to give you a simple idea / direction:
Below is for BigQuery Standard SQL
#standardSQL
SELECT
Date_of_action_first_taken,
ROUND(100 * later_1_day / Visits) AS later_1_day,
ROUND(100 * later_2_days / Visits) AS later_2_days,
ROUND(100 * later_3_days / Visits) AS later_3_days
FROM `OutputFromQuery`
You can test it with the dummy data below, taken from your question:
#standardSQL
WITH `OutputFromQuery` AS (
SELECT '01.07.17' AS Date_of_action_first_taken, 1000 AS Visits, 800 AS later_1_day, 400 AS later_2_days, 300 AS later_3_days UNION ALL
SELECT '02.07.17', 1000, 860, 780, 860 UNION ALL
SELECT '29.07.17', 1000, 780, 120, 0 UNION ALL
SELECT '30.07.17', 1000, 710, 0, 0
)
SELECT
Date_of_action_first_taken,
ROUND(100 * later_1_day / Visits) AS later_1_day,
ROUND(100 * later_2_days / Visits) AS later_2_days,
ROUND(100 * later_3_days / Visits) AS later_3_days
FROM `OutputFromQuery`
The OutputFromQuery data is as below:
Date_of_action_first_taken Visits later_1_day later_2_days later_3_days
01.07.17 1000 800 400 300
02.07.17 1000 860 780 860
29.07.17 1000 780 120 0
30.07.17 1000 710 0 0
and the final output is:
Date_of_action_first_taken later_1_day later_2_days later_3_days
01.07.17 80.0 40.0 30.0
02.07.17 90.0 78.0 86.0
29.07.17 80.0 12.0 0.0
30.07.17 70.0 0.0 0.0
I found this query on Turn Your App Data into Answers with Firebase and BigQuery (Google I/O'19)
It should work :)
#standardSQL
###################################################
# Part 1: Cohort of New Users Starting on DEC 24
###################################################
WITH
new_user_cohort AS (
SELECT DISTINCT
user_pseudo_id as new_user_id
FROM
`[your_project].[your_firebase_table].events_*`
WHERE
event_name = '[chosen_event]' AND
#set the date from when starting cohort analysis
FORMAT_TIMESTAMP("%Y%m%d", TIMESTAMP_TRUNC(TIMESTAMP_MICROS(event_timestamp), DAY, "Etc/GMT+1")) = '20191224' AND
_TABLE_SUFFIX BETWEEN '20191224' AND '20191230'
),
num_new_users AS (
SELECT count(*) as num_users_in_cohort FROM new_user_cohort
),
#############################################
# Part 2: Engaged users from Dec 24 cohort
#############################################
engaged_users_by_day AS (
SELECT
FORMAT_TIMESTAMP("%Y%m%d", TIMESTAMP_TRUNC(TIMESTAMP_MICROS(event_timestamp), DAY, "Etc/GMT+1")) as event_day,
COUNT(DISTINCT user_pseudo_id) as num_engaged_users
FROM
`[your_project].[your_firebase_table].events_*`
INNER JOIN
new_user_cohort ON new_user_id = user_pseudo_id
WHERE
event_name = 'user_engagement' AND
_TABLE_SUFFIX BETWEEN '20191224' AND '20191230'
GROUP BY
event_day
)
####################################################################
# Part 3: Daily Retention = [Engaged Users / Total Users]
####################################################################
SELECT
event_day,
num_engaged_users,
num_users_in_cohort,
ROUND((num_engaged_users / num_users_in_cohort), 3) as retention_rate
FROM
engaged_users_by_day
CROSS JOIN
num_new_users
ORDER BY
event_day
So I think I may have cracked it... from this output I would then need to manipulate it (pivot it) to make it look like the desired output; a possible pivot is sketched after the query below.
Can anyone review this for me and let me know what you think?
WITH
cohort_items AS (
SELECT
MIN( TIMESTAMP_TRUNC(TIMESTAMP_MICROS((visitStartTime*1000000 +
h.time*1000)), DAY) ) AS cohort_day, fullVisitorID
FROM
TABLE123 AS U,
UNNEST(hits) AS h
WHERE _TABLE_SUFFIX BETWEEN "20170701" AND "20170731"
AND 'ACTION TAKEN'
GROUP BY 2
),
user_activites AS (
SELECT
A.fullVisitorID,
DATE_DIFF(DATE(TIMESTAMP_TRUNC(TIMESTAMP_MICROS((visitStartTime*1000000 + h.time*1000)), DAY)), DATE(C.cohort_day), DAY) AS day_number
FROM `Table123` A
LEFT JOIN cohort_items C ON A.fullVisitorID = C.fullVisitorID,
UNNEST(hits) AS h
WHERE
A._TABLE_SUFFIX BETWEEN "20170701 AND "20170731"
AND 'ACTION TAKEN'
GROUP BY 1,2),
cohort_size AS (
SELECT
cohort_day,
count(1) as number_of_users
FROM
cohort_items
GROUP BY 1
ORDER BY 1
),
retention_table AS (
SELECT
C.cohort_day,
A.day_number,
COUNT(1) AS number_of_users
FROM
user_activites A
LEFT JOIN cohort_items C ON A.fullVisitorID = C.fullVisitorID
GROUP BY 1,2
)
SELECT
B.cohort_day,
S.number_of_users as total_users,
B.day_number,
B.number_of_users / S.number_of_users as percentage
FROM retention_table B
LEFT JOIN cohort_size S ON B.cohort_day = S.cohort_day
WHERE B.cohort_day IS NOT NULL
ORDER BY 1, 3
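For the pivot step mentioned above, a minimal sketch (my own addition; it assumes BigQuery's PIVOT operator, and retention_output is a hypothetical name standing in for the result of the query above, e.g. saved as a table or wrapped in a WITH clause):
-- Hypothetical sketch: turn each day_number into its own column.
SELECT *
FROM (
  SELECT cohort_day, day_number, percentage
  FROM retention_output  -- placeholder for the result of the query above
)
PIVOT (SUM(percentage) FOR day_number IN (0 AS day_0, 1 AS day_1, 2 AS day_2, 3 AS day_3))
ORDER BY cohort_day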
Thank you in advance!
If you use some techniques available in BigQuery, you can potentially solve this type of problem with very cost- and performance-effective solutions. As an example:
SELECT
init_date,
ARRAY((SELECT AS STRUCT days, freq, ROUND(freq * 100 / MAX(freq) OVER(), 2) FROM UNNEST(data) ORDER BY days)) data
FROM(
SELECT
init_date,
ARRAY_AGG(STRUCT(days, freq)) data
FROM(
SELECT
init_date,
data AS days,
COUNT(data) freq
FROM(
SELECT
init_date,
ARRAY(SELECT DATE_DIFF(PARSE_DATE("%Y%m%d", dts), PARSE_DATE("%Y%m%d", init_date), DAY) AS dt FROM UNNEST(dts) dts) data
FROM(
SELECT
MIN(date) init_date,
ARRAY_AGG(DISTINCT date) dts
FROM `Table123`
WHERE TRUE
AND EXISTS(SELECT 1 FROM UNNEST(hits) where eventinfo.eventCategory = 'recommendation') -- This is your 'ACTION TAKEN' filter
AND _TABLE_SUFFIX BETWEEN "20170724" AND "20170731"
GROUP BY fullvisitorid
)
),
UNNEST(data) data
GROUP BY init_date, days
)
GROUP BY init_date
)
I tested this query against our GA data and selected customers who interacted with our recommendation system (as you can see in the WHERE EXISTS filter). Example result (absolute values of freq omitted for privacy reasons):
As you can see, on day 28, for instance, 8% of customers came back 1 day later and interacted with the system again.
I recommend playing around with this query to see if it works well for you. It's simpler, cheaper, faster, and hopefully easier to maintain.

Calculating cumulative returns using SQL

I currently generate a user's "monthly_return" between two months using the code below. How would I turn "monthly_return" into a cumulative "linked" return similar to the StackOverflow question linked below?
Similar question: Running cumulative return in sql
I tried:
exp(sum(log(1 + cumulative_return) over (order by date)) - 1)
But I get the error:
PG::WrongObjectType: ERROR: OVER specified, but log is not a window function nor an aggregate function
LINE 3: exp(sum(log(1 + cumulative_return) over (order by date)) - 1...
                ^
SELECT portfolio_id,
       exp(sum(log(1 + cumulative_return) over (order by date)) - 1)
FROM (SELECT date, portfolio_id,
             (value_cents * 0.01 - cash_flow_cents * 0.01) / (lag(value_cents * 0.01, 1) over (ORDER BY portfolio_id, date)) - 1 AS cumulative_return
      FROM portfolio_balances
      WHERE portfolio_id = 16
      ORDER BY portfolio_id, date) as return_data;
The input data would be:
1/1/2017: $100 value, $100 cash flow
1/2/2017: $100 value, $0 cash flow
1/3/2017: $100 value, $0 cash flow
1/4/2017: $200 value, $100 cash flow
The output would be:
1/1/2017: 0% cumulative return
1/2/2017: 0% cumulative return
1/3/2017: 0% cumulative return
1/4/2017: 0% cumulative return
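For example (an illustrative calculation using the monthly_return formula in the query below): for 1/4/2017 the monthly return is (200 - 100) / 100 - 1 = 0, i.e. 0%, and compounding four 0% months still gives a 0% cumulative return.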
My current code, which shows monthly returns that are not linked (cumulative):
SELECT
date,
portfolio_id,
(value_cents * 0.01 - cash_flow_cents * 0.01) / (lag(value_cents * 0.01, 1) over ( ORDER BY portfolio_id, date)) - 1 AS monthly_return
FROM portfolio_balances
WHERE portfolio_id = 16
ORDER BY portfolio_id, date;
If you want a cumulative sum:
SELECT p.*,
SUM(monthly_return) OVER (PARTITION BY portfolio_id ORDER BY date) as running_monthly_return
FROM (SELECT date, portfolio_id,
(value_cents * 0.01 - cash_flow_cents * 0.01) / (lag(value_cents * 0.01, 1) over ( ORDER BY portfolio_id, date)) - 1 AS monthly_return
FROM portfolio_balances
WHERE portfolio_id = 16
) p
ORDER BY portfolio_id, date;
I don't see that this makes much sense, because you have the cumulative sum of a ratio, but that appears to be what you are asking for.
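If what you actually want is the compounded (geometrically linked) return rather than a plain sum, here is a minimal sketch of the exp/sum/log idea from the question, with the OVER clause attached to SUM() rather than to the log call (my own addition; it assumes Postgres, where ln() is the natural logarithm and the inverse of exp()):
SELECT p.*,
       -- product of (1 + monthly_return) via logs, minus 1 = cumulative linked return
       -- note: assumes monthly_return > -1; the first row's NULL return is treated as 0
       EXP(SUM(LN(1 + COALESCE(monthly_return, 0))) OVER (PARTITION BY portfolio_id ORDER BY date)) - 1 AS cumulative_return
FROM (SELECT date, portfolio_id,
             (value_cents * 0.01 - cash_flow_cents * 0.01) / (lag(value_cents * 0.01, 1) over (ORDER BY portfolio_id, date)) - 1 AS monthly_return
      FROM portfolio_balances
      WHERE portfolio_id = 16
     ) p
ORDER BY portfolio_id, date;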