Only selecting certain rows based on a calculated value - SQL

I have a table similar to:
| Student | Month | GPA |
|---------|-------|-----|
| 1       | 1     | 70  |
| 1       | 2     | 70  |
| 1       | 3     | 75  |
| 2       | 1     | 80  |
| 2       | 2     | 72  |
| 2       | 3     | 72  |
What I want is to calculate the GPA change per month, per student, only selecting rows where an actual change was observed. My desired output is:
| Student | Month | Change |
|---------|-------|--------|
| 1       | 3     | 1.071  |
| 2       | 2     | 0.9    |
So far I have the following query (simplified, but similar):
SELECT
    Student,
    Month,
    GPA,
    Change =
        CASE
            WHEN LAG(GPA, 1) OVER (ORDER BY Student, Month) !> 0
                THEN 1
            WHEN Student != LAG(Student, 1) OVER (ORDER BY Student, Month)
                THEN 1
            ELSE GPA / LAG(GPA, 1) OVER (ORDER BY Student, Month)
        END
FROM students
ORDER BY Student, Month;
The output I receive from this is:
| Student | Month | GPA | Change |
|---------|-------|-----|--------|
| 1       | 1     | 70  | 1      |
| 1       | 2     | 70  | 1      |
| 1       | 3     | 75  | 1.071  |
| 2       | 1     | 80  | 1      |
| 2       | 2     | 72  | 0.9    |
| 2       | 3     | 72  | 1      |
I believe a sub-query is needed to only select rows where Change != 1, but I'm unsure how to implement this correctly here.

You seem to want:
select s.*,
       gpa / nullif(prev_gpa, 0)   -- I suppose a 0 gpa is possible
from (select s.*,
             lag(gpa) over (partition by student order by month) as prev_gpa
      from students s
     ) s
where prev_gpa is not null and prev_gpa <> gpa;

Very similar to Gordon's, but takes advantage of the optional 3rd parameter to LAG to use the current row's GPA when there is no previous (to yield no change).
SELECT *
FROM (
    SELECT Student, Month, GPA
         , Change = GPA / LAG(GPA, 1, GPA) OVER (PARTITION BY Student ORDER BY Month)
    FROM students
) AS subQ
WHERE Change != 1.0
ORDER BY Student, Month
;
Edit: I'm not sure what the minimum GPA value could be, but it is best to be aware that a previous GPA of 0 would cause a divide by zero error.
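Combining the two ideas, a minimal sketch (T-SQL, assuming the students table from the question): the third LAG argument handles each student's first row, and NULLIF guards the division.
SELECT Student, Month, GPA, Change
FROM (
    SELECT Student, Month, GPA
         -- first row per student compares against itself (Change = 1);
         -- NULLIF turns a previous GPA of 0 into NULL instead of raising a divide-by-zero error
         , Change = GPA / NULLIF(LAG(GPA, 1, GPA) OVER (PARTITION BY Student ORDER BY Month), 0)
    FROM students
) AS subQ
WHERE Change <> 1.0
ORDER BY Student, Month;
Note that a NULL Change (previous GPA of 0) is also dropped by the WHERE clause; add OR Change IS NULL there if those rows should be kept.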

Related

How to order id's using subtotal from another column in PostgreSQL

I have a table returned by a select query. Example:
| id | day      | count |
|----|----------|-------|
| 1  | 71       | 3     |
| 1  | 70       | 2     |
| 1  | Subtotal | 5     |
| 2  | 70       | 5     |
| 2  | 71       | 2     |
| 2  | 69       | 2     |
| 2  | Subtotal | 9     |
| 3  | 69       | 1     |
| 3  | 70       | 1     |
| 3  | Subtotal | 2     |
The day column contains text values (varchar).
Subtotal is the sum of the counts for an id (e.g. id 2 has a subtotal of 5 + 2 + 2 = 9).
I now want to order this table so the ids with the lowest subtotal come first, then by day within each id, with the Subtotal row at the end (as before).
Expected output:
| id | day      | count |
|----|----------|-------|
| 3  | 69       | 1     |
| 3  | 70       | 1     |
| 3  | Subtotal | 2     |
| 1  | 70       | 2     |
| 1  | 71       | 3     |
| 1  | Subtotal | 5     |
| 2  | 69       | 2     |
| 2  | 70       | 5     |
| 2  | 71       | 2     |
| 2  | Subtotal | 9     |
I can't figure out how to order based on the subtotal only.
I've tried multiple ORDER BY clauses (e.g. ORDER BY day = 'Subtotal', and a mix of others) and window functions, but none are helping. Cheers!
Not sure if it's directly applicable to your source query (since you haven't included it), but the ordering you require on the sample data can be done with:
order by max(count) over (partition by id), day
Note: ordering by day works with your sample data, but since it's a string it will not honour numeric ordering; this should really be ordered by the source of the numerical value. Again, since we don't have your actual query I can't suggest anything more specific, but I'm sure you can substitute the correct column/expression.
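If numeric ordering of day is needed despite the varchar type, one possible sketch (assuming PostgreSQL, and that day holds either numeric text or the literal 'Subtotal'):
order by max(count) over (partition by id),   -- lowest subtotal first
         (day = 'Subtotal'),                  -- false sorts before true, so the Subtotal row goes last per id
         nullif(day, 'Subtotal')::int         -- numeric ordering for the remaining day values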
I just created a table with 3 columns and tried to reproduce your expected result. I assumed there might be a problem ordering by day (Subtotal ending up on top), but it seems to be a working solution.
create table test
(
    id    int,
    day   varchar(15),
    count int
);

insert into test
values
    (1, '71', 3),
    (1, '70', 2),
    (2, '70', 5),
    (2, '71', 2),
    (2, '69', 2),
    (3, '69', 1),
    (3, '70', 1);

select id, day, count
from
(
    select id, day, sum(count) as count
    from test
    group by id, rollup(day)
) as t
order by max(count) over (partition by id), day;
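One detail to be aware of: rollup(day) emits NULL as the day of the subtotal row, so if the literal 'Subtotal' label is wanted in the output, the outer select could presumably wrap it, for example:
select id, coalesce(t.day, 'Subtotal') as day, t.count
from (
    select id, day, sum(count) as count
    from test
    group by id, rollup(day)
) as t
order by max(t.count) over (partition by id), t.day nulls last;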

Combine PARTITION BY and GROUP BY

I have a (mssql) table like this:
+----+----------+---------+--------+--------+
| id | username | date    | scoreA | scoreB |
+----+----------+---------+--------+--------+
| 1  | jim      | 01/2020 | 100    | 0      |
| 2  | max      | 01/2020 | 0      | 200    |
| 3  | jim      | 01/2020 | 0      | 150    |
| 4  | max      | 02/2020 | 150    | 0      |
| 5  | jim      | 02/2020 | 0      | 300    |
| 6  | lee      | 02/2020 | 100    | 0      |
| 7  | max      | 02/2020 | 0      | 200    |
+----+----------+---------+--------+--------+
What I need is to get the best "combined" score per date. (By "combined" score I mean each user's best scoreA and best scoreB for that date, summed.)
The result should look like this:
+----------+---------+--------------------------------------------+
| username | date    | combined_score (max(scoreA) + max(scoreB)) |
+----------+---------+--------------------------------------------+
| jim      | 01/2020 | 250                                        |
| max      | 02/2020 | 350                                        |
+----------+---------+--------------------------------------------+
I came this far:
I can group the scores by user like this:
SELECT
    username, (max(scoreA) + max(scoreB)) AS combined_score
FROM score_table
GROUP BY username
ORDER BY combined_score DESC
And I can get the best score per date with PARTITION BY like this:
SELECT *
FROM
    (SELECT t.*, row_number() OVER (PARTITION BY date ORDER BY scoreA DESC) rn
     FROM score_table t) as tmp
WHERE tmp.rn = 1
ORDER BY date
Is there a proper way to combine these statements and get the result I need? Thank you!
Btw. Don't care about possible ties!
You can combine window functions and aggregation functions like this:
SELECT s.*
FROM (SELECT username, date, (max(scoreA) + max(scoreB)) AS combined_score,
             ROW_NUMBER() OVER (PARTITION BY date ORDER BY max(scoreA) + max(scoreB) DESC) as seqnum
      FROM score_table
      GROUP BY username, date
     ) s
ORDER BY combined_score DESC;
Note that date needs to be part of the aggregation.
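To keep only the best row per date, as in the desired output, one would presumably filter on seqnum in an outer query, for example:
SELECT s.username, s.date, s.combined_score
FROM (SELECT username, date, (max(scoreA) + max(scoreB)) AS combined_score,
             ROW_NUMBER() OVER (PARTITION BY date ORDER BY max(scoreA) + max(scoreB) DESC) AS seqnum
      FROM score_table
      GROUP BY username, date
     ) s
WHERE s.seqnum = 1
ORDER BY s.date;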

Aggregation within a certain time

I need to create a list of the 2-rated hotels in the UK that have increased their rating by at least 3 points from the beginning.
Month     | Hotel | Rating | Region |
----------|-------|--------|--------|
01-Jan-19 | A     | 1      | US     |
01-Feb-19 | B     | 2      | UK     |
01-Mar-19 | C     | 3      | EU     |
01-Apr-19 | A     | 1      | US     |
01-May-19 | B     | 4      | UK     |
01-Jun-19 | C     | 3      | EU     |
01-Jul-19 | A     | 1      | US     |
01-Aug-19 | B     | 5      | UK     |
01-Sep-19 | C     | 4      | EU     |
Given this data, the query must produce hotel B only.
It sounds like you want the first and last entries. One method uses conditional aggregation. I am going to assume that month is really a date or number and not a string:
select t.hotel
from (select t.*,
             row_number() over (partition by hotel order by month asc)  as seqnum_asc,
             row_number() over (partition by hotel order by month desc) as seqnum_desc
      from t
     ) t
group by t.hotel
having max(rating) filter (where seqnum_desc = 1) >= max(rating) filter (where seqnum_asc = 1) + 3;
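FILTER is not available everywhere (SQL Server, for instance, does not support it); the same conditional aggregation can presumably be written with CASE:
select t.hotel
from (select t.*,
             row_number() over (partition by hotel order by month asc)  as seqnum_asc,
             row_number() over (partition by hotel order by month desc) as seqnum_desc
      from t
     ) t
group by t.hotel
having max(case when seqnum_desc = 1 then rating end)
       >= max(case when seqnum_asc = 1 then rating end) + 3;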
This also works; I have tried it:
Select "Hotel"
From T
Where "Region" = 'UK'
Group by "Hotel"
Having
Min ("Rating") = 2
And
Max ("Rating") >= 5
The Link to test:
https://www.db-fiddle.com/f/6TVgrC5WRjqdyPwdGSvWGN/8

Oracle SQL newbie - Add new column that gets occurrence and computations

This post is an enhanced version of my previous post here.
Please note: this is not a duplicate post or thread.
I have 3 tables:
1. REQUIRED_AUDITS (Independent table)
2. SCORE_ENTRY (one-to-many relationship with the ERROR table)
3. ERROR
Below are the dummy data and table structure:
REQUIRED_AUDITS TABLE
+-------+------+----------+---------------+-----------------+------------+----------------+---------+
| ID    | VP   | Director | Manager       | Employee        | Req_audits | Audit_eligible | Quarter |
+-------+------+----------+---------------+-----------------+------------+----------------+---------+
| 10001 | John | King     | susan#com.com | jake#com.com    | 2          | Y              | FY18Q1  |
| 10002 | John | King     | susan#com.com | beth#com.com    | 4          | Y              | FY18Q1  |
| 10003 | John | Maria    | tony#com.com  | david#com.com   | 6          | N              | FY18Q1  |
| 10004 | John | Maria    | adam#com.com  | william#com.com | 3          | Y              | FY18Q1  |
| 10005 | John | Smith    | alex#com.com  | rose#com.com    | 6          | Y              | FY18Q1  |
+-------+------+----------+---------------+-----------------+------------+----------------+---------+
SCORE_ENTRY TABLE
+----------------+------+----------+---------------+-----------------+-------+---------+
| SCORE_ENTRY_ID | VP   | Director | Manager       | Employee        | Score | Quarter |
+----------------+------+----------+---------------+-----------------+-------+---------+
| 1              | John | King     | susan#com.com | jake#com.com    | 100   | FY18Q1  |
| 2              | John | King     | susan#com.com | jake#com.com    | 90    | FY18Q1  |
| 3              | John | King     | susan#com.com | beth#com.com    | 98.45 | FY18Q1  |
| 4              | John | King     | susan#com.com | beth#com.com    | 95    | FY18Q1  |
| 5              | John | King     | susan#com.com | beth#com.com    | 100   | FY18Q1  |
| 6              | John | King     | susan#com.com | beth#com.com    | 100   | FY18Q1  |
| 7              | John | Maria    | adam#com.com  | william#com.com | 99    | FY18Q1  |
| 8              | John | Maria    | adam#com.com  | william#com.com | 98.1  | FY18Q1  |
| 9              | John | Smith    | alex#com.com  | rose#com.com    | 96    | FY18Q1  |
| 10             | John | Smith    | alex#com.com  | rose#com.com    | 100   | FY18Q1  |
+----------------+------+----------+---------------+-----------------+-------+---------+
ERROR TABLE
+----------+-----------------------------+----------------+
| ERROR_ID | ERROR                       | SCORE_ENTRY_ID |
+----------+-----------------------------+----------------+
| 10       | Words Missing               | 2              |
| 11       | Incorrect document attached | 2              |
| 12       | No results                  | 3              |
| 13       | Value incorrect             | 4              |
| 14       | Words Missing               | 4              |
| 15       | No files attached           | 4              |
| 16       | Document read error         | 7              |
| 17       | Garbage text                | 8              |
| 18       | No results                  | 8              |
| 19       | Value incorrect             | 9              |
| 20       | No files attached           | 9              |
+----------+-----------------------------+----------------+
I have a query that gives the output below:
Director Summary
+----------+---------------+-----------------+------------------+------------------+
| Director | Manager       | Audits Required | Audits Performed | Percent Complete |
+----------+---------------+-----------------+------------------+------------------+
| King     | susan#com.com | 6               | 6                | 100%             |
| Maria    | adam#com.com  | 3               | 2                | 67%              |
| Smith    | alex#com.com  | 6               | 2                | 33%              |
+----------+---------------+-----------------+------------------+------------------+
Now I would like to add a column based on the number of scores that have an error associated with them, divided by the total count of scores.
It's not the total count of errors divided by the count of scores; instead it's the count of score entries that have at least one error, divided by the count of score entries. Please see the example below:
Considering
Director: King
Manager: susan#com.com
From the SCORE_ENTRY and ERROR tables:
King has 6 entries in SCORE_ENTRY
and 6 entries in the ERROR table.
Instead of the 6 ERROR entries, I want the number of score entries that have errors, i.e. 3.
Formula to calculate Quality:
Quality = (1 - error occurrences / total scores) * 100
For King:
Quality = (1 - 3/6) * 100
Quality = 50
Please note: it's not (1 - 6/6) * 100.
For Maria:
Quality = (1 - 2/2) * 100
Quality = 0
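(Similarly for Smith: 2 score entries, 1 of which has an error, so Quality = (1 - 1/2) * 100 = 50, as in the expected output below.)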
Below is the new output I need, with a new column called Quality:
Director Summary
+----------+---------------+---------+-----------------+------------------+------------------+
| Director | Manager       | Quality | Audits Required | Audits Performed | Percent Complete |
+----------+---------------+---------+-----------------+------------------+------------------+
| King     | susan#com.com | 50%     | 6               | 6                | 100%             |
| Maria    | adam#com.com  | 0%      | 3               | 2                | 67%              |
| Smith    | alex#com.com  | 50%     | 6               | 2                | 33%              |
+----------+---------------+---------+-----------------+------------------+------------------+
Below is the query I have (thanks to @Kaushik Nayak, @APC and others); I need to add the new column to it:
WITH aud (manager_email, director, quarter, total_audits_required)
     AS (SELECT manager_email,
                director,
                quarter,
                SUM(CASE
                        WHEN audit_eligible = 'Y' THEN required_audits
                    END)
         FROM required_audits
         GROUP BY manager_email,
                  director,
                  quarter),                         -- Total_audits
     scores (manager_email, director, quarter, audits_completed)
     AS (SELECT manager_email,
                director,
                quarter,
                Count(score)
         FROM oq_score_entry s
         GROUP BY manager_email,
                  director,
                  quarter)                          -- Audits_Performed
SELECT a.director,
       a.manager_email manager,
       a.total_audits_required,
       s.audits_completed,
       Round(( ( s.audits_completed ) / ( a.total_audits_required ) * 100 ), 2)
           percentage_complete,
       a.quarter
FROM   aud a
       left outer join scores s
                    ON a.manager_email = s.manager_email
WHERE  ( :P4_MANAGER_EMAIL = a.manager_email
          OR :P4_MANAGER_EMAIL IS NULL )
  AND  ( :P4_DIRECTOR = a.director
          OR :P4_DIRECTOR IS NULL )
  AND  ( :P4_QUARTER = a.quarter
          OR :P4_QUARTER IS NULL )
ORDER  BY a.total_audits_required DESC nulls last
Please let me know if it's confusing or if you need more details. I'm open to any suggestions and feedback.
I appreciate any help.
Thanks,
Richa
Update:
Well, my first guess was wrong, and I hope I'm getting it right now.
According to your and shawnt00's comments, you need to compute the count of score entries that have corresponding entries in the ERROR table, and use it in the quality calculation.
You get this count with the expression:
COUNT ((select max(1) from "ERROR" o where o.score_entry_id = s.score_entry_id)) AS error_occurences
max(1) returns 1 when there is at least one matching entry in "ERROR" and NULL otherwise; COUNT skips NULLs.
I hope this is clear.
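For instance, with the sample data from the question:
-- score_entry_id = 2 has two rows in "ERROR", so the scalar subquery returns 1
SELECT MAX(1) FROM "ERROR" o WHERE o.score_entry_id = 2;   -- 1
-- score_entry_id = 1 has no error rows, so MAX over zero rows returns NULL, which COUNT skips
SELECT MAX(1) FROM "ERROR" o WHERE o.score_entry_id = 1;   -- NULL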
Quality is computed as
(1 - error_occurences / audits_completed) * 100
Below is the full script, where manager_email is renamed to manager and oq_score_entry to score_entry, in accordance with your schema. I also removed the unnecessary WITH column mapping; it just complicates things in this case.
WITH aud AS (SELECT manager, director, quarter,
                    SUM(CASE
                            WHEN audit_eligible = 'Y' THEN req_audits
                        END) total_audits_required
             FROM required_audits
             GROUP BY manager, director, quarter),      -- Total_audits
     scores AS (SELECT manager, director, quarter,
                       Count(score) audits_completed,
                       COUNT((select max(1) from "ERROR" o where o.score_entry_id = s.score_entry_id)
                            ) error_occurences          -- ** Added **
                FROM score_entry s
                GROUP BY manager, director, quarter
               )                                        -- Audits_Performed
SELECT a.director,
       a.manager manager,
       a.total_audits_required,
       s.audits_completed,
       Round(( 1 - ( s.error_occurences ) / ( s.audits_completed )) * 100, 2),  -- ** Added **
       Round(( ( s.audits_completed ) / ( a.total_audits_required ) * 100 ), 2)
           percentage_complete,
       a.quarter
FROM   aud a
       left outer join scores s ON a.manager = s.manager
WHERE  ( :P4_manager = a.manager
          OR :P4_manager IS NULL )
  AND  ( :P4_DIRECTOR = a.director
          OR :P4_DIRECTOR IS NULL )
  AND  ( :P4_QUARTER = a.quarter
          OR :P4_QUARTER IS NULL )
ORDER  BY a.total_audits_required DESC nulls last
About total_errors:
To add this column you can either use a technique similar to the one used before in scores:
scores AS (
    SELECT manager, director, quarter,
           count(score) audits_completed,
           count((select max(1) from "ERROR" o where o.score_entry_id = s.score_entry_id)
                ) error_occurences,
           sum((SELECT count(*) from "ERROR" o where o.score_entry_id = s.score_entry_id)
              ) total_errors                -- summing error counts for matched score_entry_ids
    FROM score_entry s
    GROUP BY manager, director, quarter
)
Or you can rewrite the scores CTE by joining score_entry and error, which requires using DISTINCT on the score_entry fields to avoid duplication of rows:
scores AS (
    SELECT manager, director, quarter,
           count(DISTINCT s.score_entry_id) audits_completed,
           count(DISTINCT e.score_entry_id) error_occurences,   -- counting distinct score_entry_ids present in ERROR
           count(e.score_entry_id)          total_errors        -- counting total rows in ERROR
    FROM score_entry s
         LEFT JOIN "ERROR" e ON s.score_entry_id = e.score_entry_id
    GROUP BY manager, director, quarter
)
The latter approach is a bit less maintainable, since it requires care to avoid unwanted duplication.
Yet another (and maybe the most proper) way is to make a separate (third) CTE, but I don't think the query is complex enough to warrant this.
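For completeness, a minimal sketch of that separate-CTE variant (hypothetical CTE name errs), which would be LEFT JOINed to the main query on manager, director and quarter just like scores:
errs AS (
    SELECT s.manager, s.director, s.quarter,
           COUNT(DISTINCT e.score_entry_id) AS error_occurences,   -- score entries that have at least one error
           COUNT(e.score_entry_id)          AS total_errors        -- total error rows
    FROM score_entry s
         JOIN "ERROR" e ON e.score_entry_id = s.score_entry_id
    GROUP BY s.manager, s.director, s.quarter
)
Groups with no errors would simply be absent from errs, so the main query would need to COALESCE those counts to 0.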
Original answer:
I might be wrong, but it seems to me that by "count of each occurrence of error" you are trying to describe COUNT(DISTINCT expr), that is, counting unique occurrences of errors for each (manager_email, director, quarter).
If so, change the query a bit:
WITH aud (manager_email, director, quarter, total_audits_required)
     AS (SELECT manager_email,
                director,
                quarter,
                SUM(CASE
                        WHEN audit_eligible = 'Y' THEN required_audits
                    END)
         FROM required_audits
         GROUP BY manager_email,
                  director,
                  quarter),                         -- Total_audits
     scores (manager_email, director, quarter, audits_completed, distinct_errors)
     AS (SELECT manager_email,
                director,
                quarter,
                Count(score),
                COUNT(DISTINCT o.error_id)          -- ** Added **
         FROM oq_score_entry s
              join error o on o.score_entry_id = s.score_entry_id
         GROUP BY manager_email,
                  director,
                  quarter)                          -- Audits_Performed
SELECT a.director,
       a.manager_email manager,
       a.total_audits_required,
       s.audits_completed,
       Round(( ( s.distinct_errors ) / ( s.audits_completed ) * 100 ), 2) quality,   -- ** Added **
       Round(( ( s.audits_completed ) / ( a.total_audits_required ) * 100 ), 2)
           percentage_complete,
       a.quarter
FROM   aud a
       left outer join scores s
                    ON a.manager_email = s.manager_email
WHERE  ( :P4_MANAGER_EMAIL = a.manager_email
          OR :P4_MANAGER_EMAIL IS NULL )
  AND  ( :P4_DIRECTOR = a.director
          OR :P4_DIRECTOR IS NULL )
  AND  ( :P4_QUARTER = a.quarter
          OR :P4_QUARTER IS NULL )
ORDER  BY a.total_audits_required DESC nulls last
The join on your main query will need to include director and quarter once you have more data.
I suppose the easiest way to fix this is to follow the structure you've got and add another table expression joining it to the rest of your results in the same way as the original two.
select manager_email, director, quarter,
       100.0 - 100.0 * count(distinct e.score_entry_id) / count(*) as quality
from score_entry se
     left outer join error e
                  on e.score_entry_id = se.score_entry_id
group by manager_email, director, quarter
What would have made most of your explanation unnecessary is to have simply said that you want the number of scores that have an error associated with them. It was difficult to draw that out from the information you provided.
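A minimal sketch of that wider join, assuming the quality query above is wrapped in a CTE named q (a hypothetical name):
left outer join q
             on q.manager_email = a.manager_email
            and q.director      = a.director
            and q.quarter       = a.quarter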

Sum length of overlapping intervals

I've got a table in a Redshift database that contains intervals which are grouped and that potentially overlap, like so:
| interval_id | l  | u  | group |
| ----------- | -- | -- | ----- |
| 1           | 1  | 10 | A     |
| 2           | 2  | 5  | A     |
| 3           | 5  | 15 | A     |
| 4           | 26 | 30 | B     |
| 5           | 28 | 35 | B     |
| 6           | 30 | 31 | B     |
| 7           | 44 | 45 | B     |
| 8           | 56 | 58 | C     |
What I would like to do is determine the length of the union of the intervals within each group. That is, for each interval take u - l, sum over all group members, and then subtract the length of the overlaps between the intervals.
Desired result:
| group | length |
| ----- | ------ |
| A     | 14     |
| B     | 10     |
| C     | 2      |
This question has been asked before, alas it seems that all of the solutions in that thread use features that Redshift doesn't support.
This is not difficult but requires multiple steps. The key is to define the "islands" within each group and then aggregate over those. Lots of subqueries, aggregations, and window functions.
select groupId, sum(ul)
from (select groupId, (max(u) - min(l)) as ul          -- island length as u - l, matching the desired output
      from (select t.*,
                   -- a new island starts wherever there is no overlap; computed per group
                   sum(case when prev_max_u < l then 1 else 0 end)
                       over (partition by groupId order by l) as grp
            from (select t.*,
                         max(u) over (partition by groupId
                                      order by l
                                      rows between unbounded preceding and 1 preceding) as prev_max_u
                  from t
                 ) t
           ) t
      group by groupId, grp
     ) g
group by groupId;
The idea is to determine whether each record overlaps what came before it. For this purpose, it uses a cumulative max of u over all preceding records within the group. It then compares that previous max with the current l: where the previous max is smaller there is a gap, and a cumulative sum of these gap flags defines each island.
The rest is just aggregation. And more aggregation.
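As a quick trace on group B from the sample data: rows (26, 30) and (28, 35) chain into one island with min(l) = 26 and max(u) = 35, giving 35 - 26 = 9; the row (44, 45) is its own island of length 1; so the total for B is 9 + 1 = 10, matching the desired output.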