Rolling up rows and creating a new row per roll-up - SQL

I have a table like this:
DepartmentName | SubDivisionName | Importance
Security | Cyber | 1
Security | Airlines | 2
Security | Banks | 3
Health | Children | 4
Health | Elderly | 5
Housing | Housing | 6
Misc | | 7
I want to create a new table (a temp table) based on that, which should look like the one below. Notice that whenever we GROUP BY DepartmentName and the group has more than one member, that DepartmentName has SubDivisions, so we want to insert a new row for **that** department, and the Importance values get renumbered accordingly.
DepartmentName | SubDivisionName | Importance
Security | | 1
Security | Cyber | 2
Security | Airlines | 3
Security | Banks | 4
Health | | 5
Health | Children | 6
Health | Elderly | 7
Housing | Housing | 8
Misc | | 9
I tried some GROUP BY queries to find the departments that have more than one record, but I still had trouble inserting the new rows and updating the Importance column correctly.

Does the following produce your expected results?
You can union your existing data with one extra row per DepartmentName that has more than one member, then use ROW_NUMBER to generate the new sequence:
with u as (
    select DepartmentName, SubDivisionName, Importance
    from t
    union all
    select DepartmentName, null, Min(Importance)
    from t
    group by DepartmentName
    having Count(*) > 1
)
select DepartmentName, SubDivisionName,
       Row_Number() over (order by Importance, SubDivisionName) as Importance
from u
order by Importance;
Demo Fiddle
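Since you mentioned wanting a temp table, the same query can be materialized with SELECT ... INTO. This is only a sketch assuming SQL Server temp-table syntax (the #rolled_up name is illustrative); adjust for your RDBMS:
with u as (
    select DepartmentName, SubDivisionName, Importance
    from t
    union all
    -- one header row per department that has more than one member
    select DepartmentName, null, Min(Importance)
    from t
    group by DepartmentName
    having Count(*) > 1
)
select DepartmentName, SubDivisionName,
       Row_Number() over (order by Importance, SubDivisionName) as Importance
into #rolled_up   -- temp table; dropped automatically when the session ends
from u;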


Trying to join a table of individuals to a table of couples, give a family ID and not time out the server

I have one table with fake individual tax records like so (one row per filer):
T1:
+-------+---------+---------+
| Person| Spouse | Income |
+-------+---------+---------+
| 1 | 2 | 34000 |
| 2 | 1 | 10000 |
| 3 | NULL | 97000 |
| 4 | 6 | 11000 |
| 5 | NULL | 25000 |
| 6 | 4 | 100000 |
+-------+---------+---------+
I have a second table which has tax 'families', a single individual or married couple (one line per tax 'family').
T1_Family:
+-----------+--------+--------+
| Family_id | Person | Spouse |
+-----------+--------+--------+
| 2         | 2      | 1      |
| 3         | 3      | NULL   |
| 5         | 5      | NULL   |
| 6         | 6      | 4      |
+-----------+--------+--------+
Family_id = max(Person) within a couple.
The idea of joining the two is, for example, to sum the incomes of the two people in one tax family (aggregate to the family level).
So, I've tried the following:
select *
into family_table
from
(
(select * from T1_family)a
join
(select * from T1)b
on a.family = b.person **or a.spouse = b.person**
)
where family_id is not null and person is not null
What I should get (and I do get when I select 1 random couple) is one line per individual where I can then group by family_id and sum income, pension contributions, etc. BUT SQL times out before the tables can be joined. The part in bold is what's slowing down the process but I'm not sure what else to do.
Is there an easier way to group by family?
It is simpler to put the data on one row:
select a.*, p.income as person_income, s.income as spouse_income
into family_table
from t1_family a
left join t1 p on a.person = p.person
left join t1 s on a.spouse = s.person;
Of course, you can add them together as well.
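For example, summing to one income figure per family might look like this (a sketch; family_income is just an illustrative alias, and COALESCE handles single filers with no spouse):
select a.family_id,
       coalesce(p.income, 0) + coalesce(s.income, 0) as family_income
from t1_family a
left join t1 p on a.person = p.person
left join t1 s on a.spouse = s.person;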

Oracle SQL newbie - Add new column that gets occurrence and computations

This post is an enhanced version of my previous post here.
Please note: this is not a duplicate post or thread.
I have 3 tables:
1. REQUIRED_AUDITS (Independent table)
2. SCORE_ENTRY (SCORE_ENTRY has a one-to-many relationship with the ERROR table)
3. ERROR
Below are the dummy data and table structure:
REQUIRED_AUDITS TABLE
+-------+------+----------+---------------+-----------------+------------+----------------+---------+
| ID | VP | Director | Manager | Employee | Req_audits | Audit_eligible | Quarter |
+-------+------+----------+---------------+-----------------+------------+----------------+---------+
| 10001 | John | King | susan#com.com | jake#com.com | 2 | Y | FY18Q1 |
| 10002 | John | King | susan#com.com | beth#com.com | 4 | Y | FY18Q1 |
| 10003 | John | Maria | tony#com.com | david#com.com | 6 | N | FY18Q1 |
| 10004 | John | Maria | adam#com.com | william#com.com | 3 | Y | FY18Q1 |
| 10005 | John | Smith | alex#com.com | rose#com.com | 6 | Y | FY18Q1 |
+-------+------+----------+---------------+-----------------+------------+----------------+---------+
SCORE_ENTRY TABLE
+----------------+------+----------+---------------+-----------------+-------+---------+
| SCORE_ENTRY_ID | VP | Director | Manager | Employee | Score | Quarter |
+----------------+------+----------+---------------+-----------------+-------+---------+
| 1 | John | King | susan#com.com | jake#com.com | 100 | FY18Q1 |
| 2 | John | King | susan#com.com | jake#com.com | 90 | FY18Q1 |
| 3 | John | King | susan#com.com | beth#com.com | 98.45 | FY18Q1 |
| 4 | John | King | susan#com.com | beth#com.com | 95 | FY18Q1 |
| 5 | John | King | susan#com.com | beth#com.com | 100 | FY18Q1 |
| 6 | John | King | susan#com.com | beth#com.com | 100 | FY18Q1 |
| 7 | John | Maria | adam#com.com | william#com.com | 99 | FY18Q1 |
| 8 | John | Maria | adam#com.com | william#com.com | 98.1 | FY18Q1 |
| 9 | John | Smith | alex#com.com | rose#com.com | 96 | FY18Q1 |
| 10 | John | Smith | alex#com.com | rose#com.com | 100 | FY18Q1 |
+----------------+------+----------+---------------+-----------------+-------+---------+
ERROR TABLE
+----------+-----------------------------+----------------+
| ERROR_ID | ERROR | SCORE_ENTRY_ID |
+----------+-----------------------------+----------------+
| 10 | Words Missing | 2 |
| 11 | Incorrect document attached | 2 |
| 12 | No results | 3 |
| 13 | Value incorrect | 4 |
| 14 | Words Missing | 4 |
| 15 | No files attached | 4 |
| 16 | Document read error | 7 |
| 17 | Garbage text | 8 |
| 18 | No results | 8 |
| 19 | Value incorrect | 9 |
| 20 | No files attached | 9 |
+----------+-----------------------------+----------------+
I have a query that gives the output below:
Director Summary
+----------+---------------+-----------------+------------------+------------------+
| Director | Manager       | Audits Required | Audits Performed | Percent Complete |
+----------+---------------+-----------------+------------------+------------------+
| King     | susan#com.com | 6               | 6                | 100%             |
| Maria    | adam#com.com  | 3               | 2                | 67%              |
| Smith    | alex#com.com  | 6               | 2                | 33%              |
+----------+---------------+-----------------+------------------+------------------+
Now I would like to add a column with the number of scores that have an error associated with them, divided by the total count of scores.
It's not the total count of errors divided by the count of scores; instead it's the count of each occurrence of an error (i.e. score entries that have at least one error) divided by the count of scores. Please see the example below.
Considering
Director: King
Manager: susan#com.com
From the SCORE_ENTRY and ERROR tables:
King has 6 entries in the SCORE_ENTRY table
and 6 entries in the ERROR table.
Instead of the 6 entries in the ERROR table, I would like the occurrence of errors, i.e. 3 score entries with errors.
Formula to calculate Quality:
Quality = (1 - (error occurrences / total scores)) * 100
For King:
Quality = (1 - (3/6)) * 100
Quality = 50
Please note: it's not (1 - (6/6)) * 100.
For Maria:
Quality = (1 - (2/2)) * 100
Quality = 0
Below is the new output I need, with a new column called Quality:
Director Summary
+----------+---------------+---------+-----------------+------------------+------------------+
| Director | Manager       | Quality | Audits Required | Audits Performed | Percent Complete |
+----------+---------------+---------+-----------------+------------------+------------------+
| King     | susan#com.com | 50%     | 6               | 6                | 100%             |
| Maria    | adam#com.com  | 0%      | 3               | 2                | 67%              |
| Smith    | alex#com.com  | 50%     | 6               | 2                | 33%              |
+----------+---------------+---------+-----------------+------------------+------------------+
Below is the query I have (thanks to #Kaushik Nayak, #APC and others); I need to add the new column to this query:
WITH aud(manager_email, director, quarter, total_audits_required) AS (
    SELECT manager_email,
           director,
           quarter,
           SUM(CASE WHEN audit_eligible = 'Y' THEN required_audits END)
    FROM required_audits
    GROUP BY manager_email, director, quarter
), --Total_audits
scores(manager_email, director, quarter, audits_completed) AS (
    SELECT manager_email,
           director,
           quarter,
           Count(score)
    FROM oq_score_entry s
    GROUP BY manager_email, director, quarter
) --Audits_Performed
SELECT a.director,
       a.manager_email manager,
       a.total_audits_required,
       s.audits_completed,
       Round((s.audits_completed) / (a.total_audits_required) * 100, 2) percentage_complete,
       a.quarter
FROM aud a
     left outer join scores s ON a.manager_email = s.manager_email
WHERE (:P4_MANAGER_EMAIL = a.manager_email OR :P4_MANAGER_EMAIL IS NULL)
  AND (:P4_DIRECTOR = a.director OR :P4_DIRECTOR IS NULL)
  AND (:P4_QUARTER = a.quarter OR :P4_QUARTER IS NULL)
ORDER BY a.total_audits_required DESC nulls last
Please let me know if it's confusing or if you need more details. I'm open to any suggestions and feedback.
I appreciate any help.
Thanks,
Richa
Update:
Well, my first guess was wrong, and I hope I'm getting it right now.
According to your and shawnt00's comments, you need to compute the count of score entries that have corresponding entries in the ERROR table, and use it in the quality calculation.
You get this count with the expression:
COUNT ((select max(1) from "ERROR" o where o.score_entry_id=s.score_entry_id)) AS error_occurences
The inner select max(1) returns 1 when there is at least one matching entry in "ERROR" and NULL otherwise; COUNT skips NULLs.
I hope this is clear.
Quality is computed as
(1 - error_occurences/audits_completed)*100%
Below is the full script, where manager_email is renamed to manager and oq_score_entry to score_entry, in accordance with your schema. I also removed the unnecessary WITH column mapping; it just complicates things in this case.
WITH aud AS (
    SELECT manager, director, quarter,
           SUM(CASE WHEN audit_eligible = 'Y' THEN req_audits END) total_audits_required
    FROM required_audits
    GROUP BY manager, director, quarter
), --Total_audits
scores AS (
    SELECT manager, director, quarter,
           Count(score) audits_completed,
           COUNT((select max(1) from "ERROR" o where o.score_entry_id = s.score_entry_id)) error_occurences -- ** Added **
    FROM score_entry s
    GROUP BY manager, director, quarter
) --Audits_Performed
SELECT a.director,
       a.manager manager,
       a.total_audits_required,
       s.audits_completed,
       Round((1 - (s.error_occurences) / (s.audits_completed)) * 100, 2), -- ** Added **
       Round((s.audits_completed) / (a.total_audits_required) * 100, 2) percentage_complete,
       a.quarter
FROM aud a
     left outer join scores s ON a.manager = s.manager
WHERE (:P4_manager = a.manager OR :P4_manager IS NULL)
  AND (:P4_DIRECTOR = a.director OR :P4_DIRECTOR IS NULL)
  AND (:P4_QUARTER = a.quarter OR :P4_QUARTER IS NULL)
ORDER BY a.total_audits_required DESC nulls last
About total_errors:
To add this column, you can either use a technique similar to the one used above in scores:
scores AS (
    SELECT manager, director, quarter,
           count(score) audits_completed,
           count((select max(1) from "ERROR" o where o.score_entry_id = s.score_entry_id)) error_occurences,
           sum((SELECT count(*) from "ERROR" o where o.score_entry_id = s.score_entry_id)) total_errors -- summing error counts for matched score_entry_ids
    FROM score_entry s
    GROUP BY manager, director, quarter
)
Or you can rewrite the scores CTE to join score_entry and error, which requires using DISTINCT on the score_entry fields to avoid duplicating rows:
scores AS (
    SELECT manager, director, quarter,
           count(DISTINCT s.score_entry_id) audits_completed,
           count(DISTINCT e.score_entry_id) error_occurences, -- counting distinct score_entry_ids present in Error
           count(e.score_entry_id) total_errors -- counting total rows in Error
    FROM score_entry s
         LEFT JOIN "ERROR" e ON s.score_entry_id = e.score_entry_id
    GROUP BY manager, director, quarter
)
The latter approach is a bit less maintainable, since it requires you to be careful about unwanted duplication.
Yet another (and maybe the most proper) way is to make a separate (third) CTE, but I don't think the query is complex enough to warrant it.
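For completeness, that separate CTE could look roughly like this (an untested sketch; errs is just an illustrative name, and the column names follow the schema above):
WITH errs AS (
    -- one row per score entry that has errors, with its error count
    SELECT score_entry_id, count(*) AS total_errors
    FROM "ERROR"
    GROUP BY score_entry_id
),
scores AS (
    SELECT s.manager, s.director, s.quarter,
           count(s.score)                   AS audits_completed,
           count(e.score_entry_id)          AS error_occurences, -- score entries that have at least one error
           coalesce(sum(e.total_errors), 0) AS total_errors
    FROM score_entry s
         LEFT JOIN errs e ON e.score_entry_id = s.score_entry_id
    GROUP BY s.manager, s.director, s.quarter
)
SELECT * FROM scores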
Original answer:
I might be wrong, but it seems to me that by "count of each occurrence of error" you are trying to describe COUNT(DISTINCT expr), that is, counting unique occurrences of errors for each (manager_email, director, quarter).
If so, change the query a bit:
WITH aud(manager_email, director, quarter, total_audits_required) AS (
    SELECT manager_email,
           director,
           quarter,
           SUM(CASE WHEN audit_eligible = 'Y' THEN required_audits END)
    FROM required_audits
    GROUP BY manager_email, director, quarter
), --Total_audits
scores(manager_email, director, quarter, audits_completed, distinct_errors) AS (
    SELECT manager_email,
           director,
           quarter,
           Count(score),
           COUNT(DISTINCT o.error_id) -- ** Added **
    FROM oq_score_entry s
         join error o on o.score_entry_id = s.score_entry_id
    GROUP BY manager_email, director, quarter
) --Audits_Performed
SELECT a.director,
       a.manager_email manager,
       a.total_audits_required,
       s.audits_completed,
       Round((s.distinct_errors) / (s.audits_completed) * 100, 2) quality, -- ** Added **
       Round((s.audits_completed) / (a.total_audits_required) * 100, 2) percentage_complete,
       a.quarter
FROM aud a
     left outer join scores s ON a.manager_email = s.manager_email
WHERE (:P4_MANAGER_EMAIL = a.manager_email OR :P4_MANAGER_EMAIL IS NULL)
  AND (:P4_DIRECTOR = a.director OR :P4_DIRECTOR IS NULL)
  AND (:P4_QUARTER = a.quarter OR :P4_QUARTER IS NULL)
ORDER BY a.total_audits_required DESC nulls last
The join on your main query will need to include director and quarter once you have more data.
I suppose the easiest way to fix this is to follow the structure you've got and add another table expression joining it to the rest of your results in the same way as the original two.
select manager_email, director, quarter,
       -- divide by distinct score entries rather than joined rows, so entries with several errors don't inflate the denominator
       100.0 - 100.0 * count(distinct e.score_entry_id) / count(distinct se.score_entry_id) as quality
from score_entry se
     left outer join error e on e.score_entry_id = se.score_entry_id
group by manager_email, director, quarter
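Wired into your existing query as a third table expression, it might look roughly like this (an untested sketch along the lines above, with the joins extended to director and quarter as noted):
WITH aud AS (
    -- your existing Total_audits CTE
    SELECT manager_email, director, quarter,
           SUM(CASE WHEN audit_eligible = 'Y' THEN required_audits END) AS total_audits_required
    FROM required_audits
    GROUP BY manager_email, director, quarter
),
scores AS (
    -- your existing Audits_Performed CTE
    SELECT manager_email, director, quarter, COUNT(score) AS audits_completed
    FROM score_entry
    GROUP BY manager_email, director, quarter
),
qual AS (
    -- the quality expression from above
    SELECT manager_email, director, quarter,
           100.0 - 100.0 * COUNT(DISTINCT e.score_entry_id) / COUNT(DISTINCT se.score_entry_id) AS quality
    FROM score_entry se
         LEFT OUTER JOIN error e ON e.score_entry_id = se.score_entry_id
    GROUP BY manager_email, director, quarter
)
SELECT a.director, a.manager_email AS manager, q.quality,
       a.total_audits_required, s.audits_completed
FROM aud a
     LEFT OUTER JOIN scores s ON s.manager_email = a.manager_email
                             AND s.director = a.director
                             AND s.quarter = a.quarter
     LEFT OUTER JOIN qual q ON q.manager_email = a.manager_email
                           AND q.director = a.director
                           AND q.quarter = a.quarter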
What would have made most of your explanation unnecessary is to have simply said that you want the number of scores that have an error associated with them. It was difficult to draw that out from the information you provided.

Applying distinct in multiple columns in SQL Server

I am trying to get a distinct result by only one column (message). I tried:
SELECT DISTINCT
[id], [message]
FROM Example table
GROUP BY [message]
But it doesn't show the desired result.
Please let me know how I can do it.
Example table:
id | Message |
-- ------------
1 | mike |
2 | mike |
3 | star |
4 | star |
5 | star |
6 | sky |
7 | sky |
8 | sky |
Result table:
id | Message |
-- ------------
1 | mike |
3 | star |
6 | sky |
Group by the column you want to be unique and use an aggregate function on the other column. You want the lowest id for every message, so use MIN()
select min(id) as id,
message
from your_table
group by message
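If you later need more columns than just the id, a ROW_NUMBER variant gives the same result; here is a sketch (assumes SQL Server 2005 or later, since the question mentions SQL Server):
select id, message
from (
    select id, message,
           row_number() over (partition by message order by id) as rn -- rn = 1 marks the lowest id per message
    from your_table
) t
where rn = 1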

SQL Group by one column and decide which row to choose

Let's say I have data like this :
| id | code | name | number |
-----------------------------------------------
| 1 | 20 | A | 10 |
| 2 | 20 | B | 20 |
| 3 | 10 | C | 30 |
| 4 | 10 | D | 80 |
I would like to group rows by code value, but get real rows back (not some aggregate function).
I know that just
select *
from table
group by code
won't work, because the database doesn't know which row to return when code is the same.
So my question is how to tell the database to select (for example) the row with the lower number, so in my case:
| id | code | name | number |
-----------------------------------------------
| 1 | 20 | A | 10 |
| 3 | 10 | C | 30 |
P.S.
I know how to do this with PARTITION BY, but this is only allowed in Oracle databases and can't be created with the JPA criteria builder (which is my ultimate goal).
Why don't you use code like this?
SELECT id, code, name, number
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY code ORDER BY number ASC) AS RowNo
    FROM table
) s
WHERE s.RowNo = 1
You can look at this site:
Data Partitioning
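Since the ultimate goal is the JPA criteria builder, where window functions are awkward to express, a correlated subquery may be easier to translate. This is only a sketch, keeping the placeholder name table from your example (note that ties on number within a code would return more than one row):
SELECT t.id, t.code, t.name, t.number
FROM table t
WHERE t.number = (
    SELECT MIN(t2.number)   -- keep the row with the lowest number per code
    FROM table t2
    WHERE t2.code = t.code
)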

How do I add a column that marks one of each value?

I'm joining two tables in SQL. I currently have the SQL as:
SELECT table1.ProjectName AS "Project Name",
table1.ProjectCost AS "Project Cost",
table2.ExpenseName AS "Expense Name",
table2.ExpenseCost AS "Expense Cost"
FROM TABLE1 table1
INNER JOIN TABLE2 table2
ON table1.ProjectName = table2.ProjectName;
The result looks like:
Project Name | Project Cost | Expense Name | Expense Cost
------------------------------------------------------------
Project 1 | 123456 | Labor | 12365
Project 1 | 123456 | Rent | 120000
Project 2 | 8421 | (null) | (null)
Project 3 | 987654 | Paper | 1023
Project 3 | 987654 | Pens | 546
I want to add a column that marks one row of each Project Name so that I can filter on it in Tableau and sum the project costs.
EX:
Project Name | Project Cost | Expense Name | Expense Cost | Unique Value
----------------------------------------------------------------------------
Project 1 | 123456 | Labor | 12365 | Y
Project 1 | 123456 | Rent | 12000 | N
Project 2 | 8421 | (null) | (null) | Y
Project 3 | 987654 | Paper | 1023 | Y
Project 3 | 987654 | Pens | 546 | N
Project 3 | 987654 | Party | 9856 | N
I suppose you can use the LAG function. I actually asked this not long ago; I can share my question, maybe it helps you. But instead of selecting the value, you could create a temporary table and populate that column based on the LAG function:
SELECT Only one value per ID - SQL Server
You can use rank as well:
select t.*
     , case when lag(project_name) over (partition by 1 order by project_name, rownum) = project_name then 'N' else 'Y' end n
     , case when rank() over (partition by project_name order by rownum) > 1 then 'N' else 'Y' end n
from join_result t
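Applied directly to the join from the question, a ROW_NUMBER variant might look like this (a sketch; the ORDER BY inside each project is arbitrary, so pick a column that gives a stable 'first' row, and a LEFT JOIN keeps projects with no expenses, like Project 2, in the output):
SELECT table1.ProjectName AS "Project Name",
       table1.ProjectCost AS "Project Cost",
       table2.ExpenseName AS "Expense Name",
       table2.ExpenseCost AS "Expense Cost",
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY table1.ProjectName
                                    ORDER BY table2.ExpenseName) = 1
            THEN 'Y' ELSE 'N' END AS "Unique Value"
FROM TABLE1 table1
LEFT JOIN TABLE2 table2
  ON table1.ProjectName = table2.ProjectName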