Converting a SQL subquery to a join for performance gains - sql

I have a subquery with an inner join. This join is meant to cut the data down to a more manageable size before extracting data via an UNPIVOT, which has a further join to pull out only the relevant matches.
When I've looked at the execution plan, it seems like the outer select is being executed first and is therefore taking an inordinate amount of time to complete, as it is processing the data for all gamers instead of the cut-down cohort.
This is the query:
SELECT
t2.Gamer_ID,
C.Feature_Code,
C.Feature_Name,
t2.CODE_DATE
FROM
(
SELECT
A.Gamer_ID
,[Identification_Code_Code_1]
,[Identification_Code_Code_2]
,[Identification_Code_Code_3]
,[Identification_Code_Code_4]
,[Identification_Code_Code_5]
,[Identification_Code_Code_6]
,[Identification_Code_Code_7]
,[Identification_Code_Code_8]
,[Identification_Code_Code_9]
,[Identification_Code_Code_10]
,CAST(Joining_date AS DATE) AS CODE_DATE
FROM Gamer_Characteristics A
INNER JOIN Gamer_Population P ON P.Gamer_ID = A.Gamer_ID --cuts down the number of gamers to the selected cohort
) s
unpivot (CODE for col in (
[Identification_Code_Code_1]
,[Identification_Code_Code_2]
,[Identification_Code_Code_3]
,[Identification_Code_Code_4]
,[Identification_Code_Code_5]
,[Identification_Code_Code_6]
,[Identification_Code_Code_7]
,[Identification_Code_Code_8]
,[Identification_Code_Code_9]
,[Identification_Code_Code_10])) as t2
INNER JOIN Gamer_feature_Code C ON C.CODE = LEFT(t2.CODE,C.CODE_LENGTH) --join to a dimension table to pull through characteristics based on code and code length
WHERE
T2.CODE_DATE <= '2020-03-31'
GROUP BY t2.Gamer_ID,
C.Feature_Code,
C.Feature_Name,
t2.CODE_DATE
I have two questions:
1: Can this be converted to use a join instead of a subquery?
2: Can I force the inner join in the subquery to take precedence over the inner join in the outer select?
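Not a definitive answer, but one way to guarantee that the cohort join happens before the UNPIVOT and the dimension join is to materialise the filtered rows first, e.g. into a temp table. A rough, untested sketch using the names from the query above (#GamerCohort is just an illustrative name):
SELECT A.Gamer_ID,
       A.[Identification_Code_Code_1], A.[Identification_Code_Code_2],
       A.[Identification_Code_Code_3], A.[Identification_Code_Code_4],
       A.[Identification_Code_Code_5], A.[Identification_Code_Code_6],
       A.[Identification_Code_Code_7], A.[Identification_Code_Code_8],
       A.[Identification_Code_Code_9], A.[Identification_Code_Code_10],
       CAST(Joining_date AS DATE) AS CODE_DATE
INTO   #GamerCohort                              -- illustrative temp table name
FROM   Gamer_Characteristics A
INNER JOIN Gamer_Population P ON P.Gamer_ID = A.Gamer_ID;  -- cohort filter runs first

SELECT t2.Gamer_ID, C.Feature_Code, C.Feature_Name, t2.CODE_DATE
FROM   #GamerCohort s
UNPIVOT (CODE FOR col IN (
        [Identification_Code_Code_1], [Identification_Code_Code_2],
        [Identification_Code_Code_3], [Identification_Code_Code_4],
        [Identification_Code_Code_5], [Identification_Code_Code_6],
        [Identification_Code_Code_7], [Identification_Code_Code_8],
        [Identification_Code_Code_9], [Identification_Code_Code_10])) AS t2
INNER JOIN Gamer_feature_Code C ON C.CODE = LEFT(t2.CODE, C.CODE_LENGTH)
WHERE  t2.CODE_DATE <= '2020-03-31'
GROUP BY t2.Gamer_ID, C.Feature_Code, C.Feature_Name, t2.CODE_DATE;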

Related

SQL - faster to filter by large table or small table

I have the below query which takes a while to run, since ir_sales_summary is ~ 2 billion rows:
select c.ChainIdentifier, s.SupplierIdentifier, s.SupplierName, we.Weekend,
sum(sales_units_cy) as TY_unitSales, sum(sales_cost_cy) as TY_costDollars, sum(sales_units_ret_cy) as TY_retailDollars,
sum(sales_units_ly) as LY_unitSales, sum(sales_cost_ly) as LY_costDollars, sum(sales_units_ret_ly) as LY_retailDollars
from ir_sales_summary i
left join Chains c
on c.ChainID = i.ChainID
inner join Suppliers s
on s.SupplierID = i.SupplierID
inner join tmpWeekend we
on we.SaleDate = i.saledate
where year(i.saledate) = '2017'
group by c.ChainIdentifier, s.SupplierIdentifier, s.SupplierName, we.Weekend
(Worth noting, it takes roughly 3 hours to run since it is using a view that brings in data from a legacy service)
I'm thinking there's a way to speed up the filtering, since I just need the data from 2017. Should I be filtering from the big table (i) or be filtering from the much smaller weekending table (which gives us just the week ending dates)?
Try this; it might help. Joining a static table as the first table in the query onto a fact/dynamic table will impact query performance, I believe.
SELECT c.ChainIdentifier
,s.SupplierIdentifier
,s.SupplierName
,i.Weekend
,sum(sales_units_cy) AS TY_unitSales
,sum(sales_cost_cy) AS TY_costDollars
,sum(sales_units_ret_cy) AS TY_retailDollars
,sum(sales_units_ly) AS LY_unitSales
,sum(sales_cost_ly) AS LY_costDollars
,sum(sales_units_ret_ly) AS LY_retailDollars
FROM Suppliers s
INNER JOIN (
SELECT we.Weekend
,supplierid
,chainid
,sales_units_cy
,sales_cost_cy
,sales_units_ret_cy
,sales_units_ly
,sales_cost_ly
,sales_units_ret_ly
FROM ir_sales_summary i
INNER JOIN tmpWeekend we
ON we.SaleDate = i.saledate
WHERE year(i.saledate) = '2017'
) i
ON s.SupplierID = i.SupplierID
INNER JOIN Chains c
ON c.ChainID = i.ChainID
GROUP BY c.ChainIdentifier
,s.SupplierIdentifier
,s.SupplierName
,i.Weekend
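One further thought of my own, separate from the rewrite above: year(i.saledate) = '2017' applies a function to the column, so an index on saledate cannot be seeked for that filter. If only 2017 data is needed, a sargable range predicate might be worth trying (a sketch, assuming saledate is a date/datetime column):
WHERE i.saledate >= '20170101'
  AND i.saledate <  '20180101'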

Why does separate table perform significantly better than subquery?

I was trying to improve the performance of a SQL query and tried a few combinations.
Original Query
SELECT ALIAS_A.id1,
ALIAS_A.id2,
ALIAS_B.columnA,
ALIAS_C.columnB,
ALIAS_B.columnC
FROM db_A.table_A ALIAS_A
LEFT OUTER JOIN db_A.table_B ALIAS_B
ON ALIAS_A.id2 = ALIAS_B.id2
LEFT OUTER JOIN db_B.table_C ALIAS_C
ON ALIAS_B.columnA = ALIAS_C.item_num
LEFT OUTER JOIN db_A.table_D ALIAS_D
ON ALIAS_A.id2 = ALIAS_D.id2
INNER JOIN db_C.table_E ALIAS_E
ON Cast(ALIAS_A.column_date AS DATE) BETWEEN
ALIAS_E.column_startdate AND ALIAS_E.column_enddate
WHERE ALIAS_E.fiscalyear >= 2016
AND Cast(ALIAS_A.columnD AS DATE) BETWEEN
CURRENT_DATE - 5 AND CURRENT_DATE
The above query consumes nearly 400k impactCPU
Optimized Query 1
SELECT New_sub_table.id1,
New_sub_table.id2,
ALIAS_B.columnA,
ALIAS_C.columnB,
ALIAS_B.columnC
--changed part start--
FROM ( sel * from db_A.table_A ALIAS_A WHERE Cast(ALIAS_A.columnD AS DATE) BETWEEN
CURRENT_DATE - 5 AND CURRENT_DATE ) New_sub_table -- created a subquery
--changed part end--
LEFT OUTER JOIN db_A.table_B ALIAS_B
ON New_sub_table.id2 = ALIAS_B.id2
LEFT OUTER JOIN db_B.table_C ALIAS_C
ON ALIAS_B.columnA = ALIAS_C.item_num
LEFT OUTER JOIN db_A.table_D ALIAS_D
ON New_sub_table.id2 = ALIAS_D.id2
INNER JOIN db_C.table_E ALIAS_E
ON Cast(New_sub_table.column_date AS DATE) BETWEEN
ALIAS_E.column_startdate AND ALIAS_E.column_enddate
WHERE ALIAS_E.fiscalyear >= 2016
I thought I would filter the data first and then do the joins. Afterwards I checked the performance stats: it was consuming nearly 390k CPU. Not much of a difference.
Optimized Query 2
SELECT ALIAS_A.id1,
ALIAS_A.id2,
ALIAS_B.columnA,
ALIAS_C.columnB,
ALIAS_B.columnC
--changed part start--
FROM INTERMEDIATE_DB.INTERMEDIATE_TABLE ALIAS_A --CREATED AN INTERMEDIATE TABLE
--changed part end--
LEFT OUTER JOIN db_A.table_B ALIAS_B
ON ALIAS_A.id2 = ALIAS_B.id2
LEFT OUTER JOIN db_B.table_C ALIAS_C
ON ALIAS_B.columnA = ALIAS_C.item_num
LEFT OUTER JOIN db_A.table_D ALIAS_D
ON ALIAS_A.id2 = ALIAS_D.id2
INNER JOIN db_C.table_E ALIAS_E
ON Cast(ALIAS_A.column_date AS DATE) BETWEEN
ALIAS_E.column_startdate AND ALIAS_E.column_enddate
WHERE ALIAS_E.fiscalyear >= 2016
MACRO for loading data into intermediate table
INSERT INTO INTERMEDIATE_DB.INTERMEDIATE_TABLE
sel * from db_A.table_A ALIAS_A WHERE Cast(ALIAS_A.columnD AS DATE) BETWEEN
CURRENT_DATE - 5 AND CURRENT_DATE
So what I did here was use an intermediate table instead of a subquery. The intermediate table gets loaded via the macro first and then the select query runs. It now consumes only 50k impactCPU (for both the macro and the select query combined).
My question -
I am unable to reason about why this is happening, even though the logic behind both queries is the same (or so I think it is). What would be the best practice if this is the incorrect way?
Your main problem is the Cast(ALIAS_A.columnD AS DATE). When you check Explains you will notice the optimizer has no confidence for this step, probably greatly overestimating the number of rows returned.
But when you materialize the Select the number of rows is better known and the order of joins changes.
You would probably get the same plan if you Collect Statistics on the Cast(ALIAS_A.columnD AS DATE); run DIAGNOSTIC HELPSTATS ON FOR SESSION; and Explain should then show you this as a recommended statistic.
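For reference, those statements might look like this on Teradata (a sketch, assuming a Teradata release that supports statistics on expressions; the statistic name is mine):
DIAGNOSTIC HELPSTATS ON FOR SESSION;            -- EXPLAIN will now list recommended statistics

COLLECT STATISTICS
  COLUMN (CAST(columnD AS DATE)) AS columnD_as_date   -- illustrative statistic name
ON db_A.table_A;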

Using summed field in query twice with IIF statement - have I missed some syntax somewhere?

Having a bit of a problem with my code and can't figure out where I'm going wrong.
Essentially this query will return all employees for a given employer for a given year, along with the amount of their allowances, tax withheld, and gross payments they've received, and their Reportable Employer Superannuation Contributions (RESC).
RESC is any amounts (tblSuperPayments.PaymentAmount) paid over and above the superannuation guarantee, which is gross payments (sum of tblPayment.GrossPayment) * super rate (tblSuperRate.SuperRate). Otherwise, RESC is 0.
The data that I currently have in my tables is as follows
SUM(tblPayment.GrossPayment) = 1730
SUM(tblEmployee.TaxPayable) = 80
SUM(tblSuperPayments.PaymentAmount) = 500
tblSuperRate.SuperRate = 9.5%
Therefore my query should be returning an amount of RESC of 500-(1730*9.5%)= 335.65.
However, my query is currently returning $835.65 - meaning that (1730*9.5%) is returning -335.65.
I can't figure out where my logic is going wrong - it's probably something simple but I can't see it. I suspect that it might be summing tblPayment.GrossPayment twice (edited on request)
SELECT
tblEmployee.EmployeeID AS Id,
SUM(tblPayment.Allowances) AS TotAllow,
SUM(tblPayment.TaxPayable) AS TotTax,
SUM(tblPayment.GrossPayment) AS TotGross,
(IIF
((SUM(tblSuperPayments.PaymentAmount)) <= (SUM(tblPayment.GrossPayment)*tblSuperRate.SuperRate),
0,
(SUM(tblSuperPayments.PaymentAmount) - (SUM(tblPayment.GrossPayment)*tblSuperRate.SuperRate))
)) As TotRESC
FROM
((tblEmployee
LEFT JOIN tblPayment
ON tblEmployee.EmployeeID = tblPayment.fk_EmployeeID)
LEFT JOIN tblSuperPayments
ON tblEmployee.EmployeeID = tblSuperPayments.fk_EmployeeID)
LEFT JOIN tblSuperRate
ON (tblPayment.PaymentDate <= tblSuperRate.TaxYearEnd)
AND (tblPayment.PaymentDate >= tblSuperRate.TaxYearStart)
WHERE
tblEmployee.fk_EmployerID = 1
GROUP BY
tblEmployee.EmployeeID,
tblSuperRate.SuperRate;
Looking at your query, I recommend that you group by just the primary key (EmployeeID) of tblEmployee, use the result as a subquery, and do the join afterwards, rather than using many columns of tblEmployee in the GROUP BY, which might cause duplicate rows. I rewrote the query as mentioned above and added comments at the places which might cause the error.
SELECT
tblEmployee.TFN,
tblEmployee.FirstName,
tblEmployee.MiddleName,
tblEmployee.LastName,
tblEmployee.DOB,
tblEmployee.MailingAddress,
tblEmployee.AddressLine2,
tblEmployee.City,
tblEmployee.fk_StateProvinceID,
tblEmployee.PostalCode,
temp.TotAllow,
temp.TotTax,
temp.TotGross,
temp.TotRESC
FROM
(SELECT
tblEmployee.EmployeeID AS Id,
SUM(tblPayment.Allowances) AS TotAllow,
SUM(tblPayment.TaxPayable) AS TotTax,
SUM(tblPayment.GrossPayment) AS TotGross,
(IIF
((SUM(tblSuperPayments.PaymentAmount)) <= (SUM(tblPayment.GrossPayment)*tblSuperRate.SuperRate),
0,
(SUM(tblSuperPayments.PaymentAmount) - (SUM(tblPayment.GrossPayment)*tblSuperRate.SuperRate))
)) As TotRESC
FROM
((tblEmployee
LEFT JOIN tblPayment // any reason for using left join over inner join
ON tblEmployee.EmployeeID = tblPayment.fk_EmployeeID)
LEFT JOIN tblSuperPayments // any reason for using left join over inner join
ON tblEmployee.EmployeeID = tblSuperPayments.fk_EmployeeID)
LEFT JOIN tblSuperRate // any reason for using left join over inner join
ON (tblPayment.PaymentDate <= tblSuperRate.TaxYearEnd) // these two conditions might be returning
AND (tblPayment.PaymentDate >= tblSuperRate.TaxYearStart) //two SuperRate rows because of using equals in both
WHERE
tblEmployee.fk_EmployerID = 1
GROUP BY
tblEmployee.EmployeeID,
tblSuperRate.SuperRate) temp // Does a single employee have more than one SuperRate? Why group by it?
JOIN tblEmployee ON tblEmployee.EmployeeID=temp.Id;

Query faster with top attribute

Why is this query faster in SQL Server 2008 R2 (Version 10.50.2806.0)
SELECT
MAX(AtDate1),
MIN(AtDate2)
FROM
(
SELECT TOP 1000000000000
at.Date1 AS AtDate1,
at.Date2 AS AtDate2
FROM
dbo.tab1 a
INNER JOIN
dbo.tab2 at
ON
a.id = at.RootId
AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE
a.Number = 223889
)B
than
SELECT
MAX(AtDate1),
MIN(AtDate2)
FROM
(
SELECT
at.Date1 AS AtDate1,
at.Date2 AS AtDate2
FROM
dbo.tab1 a
INNER JOIN
dbo.tab2 at
ON
a.id = at.RootId
AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE
a.Number = 223889
)B
?
The first statement, with the TOP attribute, is six times faster.
The count(*) of the inner subquery is 9280 rows.
Can I use a HINT to make the SQL Server optimiser get it right?
I see you've now posted the plans. Just luck of the draw.
Your actual query is a 16 table join.
SELECT max(atDate1) AS AtDate1,
min(atDate2) AS AtDate2,
max(vtDate1) AS vtDate1,
min(vtDate2) AS vtDate2,
max(bgtDate1) AS bgtDate1,
min(bgtDate2) AS bgtDate2,
max(lftDate1) AS lftDate1,
min(lftDate2) AS lftDate2,
max(lgtDate1) AS lgtDate1,
min(lgtDate2) AS lgtDate2,
max(bltDate1) AS bltDate1,
min(bltDate2) AS bltDate2
FROM (SELECT TOP 100000 at.Date1 AS atDate1,
at.Date2 AS atDate2,
vt.Date1 AS vtDate1,
vt.Date2 AS vtDate2,
bgt.Date1 AS bgtDate1,
bgt.Date2 AS bgtDate2,
lft.Date1 AS lftDate1,
lft.Date2 AS lftDate2,
lgt.Date1 AS lgtDate1,
lgt.Date2 AS lgtDate2,
blt.Date1 AS bltDate1,
blt.Date2 AS bltDate2
FROM dbo.Tab1 a
INNER JOIN dbo.Tab2 at
ON a.id = at.Tab1Id
AND cast(Getdate() AS DATE) BETWEEN at.Date1 AND at.Date2
INNER JOIN dbo.Tab5 v
ON v.Tab1Id = a.Id
INNER JOIN dbo.Tab16 g
ON g.Tab5Id = v.Id
INNER JOIN dbo.Tab3 vt
ON v.id = vt.Tab5Id
AND cast(Getdate() AS DATE) BETWEEN vt.Date1 AND vt.Date2
LEFT OUTER JOIN dbo.Tab4 vk
ON v.id = vk.Tab5Id
LEFT OUTER JOIN dbo.VerkaufsTab3 vkt
ON vk.id = vkt.Tab4Id
LEFT OUTER JOIN dbo.Plu p
ON p.Tab4Id = vk.Id
LEFT OUTER JOIN dbo.Tab15 bg
ON bg.Tab5Id = v.Id
LEFT OUTER JOIN dbo.Tab7 bgt
ON bgt.Tab15Id = bg.Id
AND cast(Getdate() AS DATE) BETWEEN bgt.Date1 AND bgt.Date2
LEFT OUTER JOIN dbo.Tab11 b
ON b.Tab15Id = bg.Id
LEFT OUTER JOIN dbo.Tab14 lf
ON lf.Id = b.Id
LEFT OUTER JOIN dbo.Tab8 lft
ON lft.Tab14Id = lf.Id
AND cast(Getdate() AS DATE) BETWEEN lft.Date1 AND lft.Date2
LEFT OUTER JOIN dbo.Tab13 lg
ON lg.Id = b.Id
LEFT OUTER JOIN dbo.Tab9 lgt
ON lgt.Tab13Id = lg.Id
AND cast(Getdate() AS DATE) BETWEEN lgt.Date1 AND lgt.Date2
LEFT OUTER JOIN dbo.Tab10 bl
ON bl.Tab11Id = b.Id
LEFT OUTER JOIN dbo.Tab6 blt
ON blt.Tab10Id = bl.Id
AND cast(Getdate() AS DATE) BETWEEN blt.Date1 AND blt.Date2
WHERE a.Nummer = 223889) B
On both the good and bad plans the Execution Plan shows "Reason for Early Termination of Statement Optimization" as "Time Out".
The two plans have slightly different join orders.
The only join in the plans not satisfied by an index seek is that on Tab9. This has 63,926 rows.
The missing index details in the execution plan suggest that you create the following index.
CREATE NONCLUSTERED INDEX [missing_index]
ON [dbo].[Tab9] ([Date1],[Date2])
INCLUDE ([Tab13Id])
The problematic part of the bad plan can be clearly seen in SQL Sentry Plan Explorer
SQL Server estimates that 1.349174 rows will be returned from the previous joins coming into the join on Tab9. And therefore costs the nested loops join as if it will need to execute the scan on the inside table 1.349174 times.
In fact 2,600 rows feed into that join, meaning that it does 2,600 full scans of Tab9 (2,600 * 63,926 = 166,207,600 rows).
It just so happens that on the good plan the estimated number of rows coming in to the join is 2.74319. This is still wrong by three orders of magnitude, but the slightly increased estimate means SQL Server favors a hash join instead. A hash join just does one pass through Tab9.
I would first try adding the missing index on Tab9.
Also/instead you might try updating the statistics on all tables involved (especially those with a date predicate such as Tab2 Tab3 Tab7 Tab8 Tab6) and see if that goes some way to correcting the huge discrepancy between estimated and actual rows on the left of the plan.
Also breaking the query up into smaller parts and materialising these into temporary tables with appropriate indexes might help. SQL Server can then use the statistics on these partial results to make better decisions for joins later in the plan.
Only as a last resort would I consider using query hints to try and force the plan with a hash join. Your options for doing that are either the USE PLAN hint, in which case you dictate exactly the plan you want, including all join types and orders, or stating LEFT OUTER HASH JOIN tab9 .... This second option also has the side effect of fixing all join orders in the plan. Both mean that SQL Server will be severely limited in its ability to adjust the plan with changes in data distribution.
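To illustrate the "break the query up and materialise" option mentioned above (a sketch only, not from the original answer; #Part1 and the index name are placeholders):
-- Materialise the most selective part of the query first
SELECT a.id, at.Date1, at.Date2
INTO #Part1
FROM dbo.Tab1 a
INNER JOIN dbo.Tab2 at
ON a.id = at.Tab1Id
AND cast(Getdate() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE a.Nummer = 223889;

-- Index it so the later joins get accurate statistics and a seekable access path
CREATE CLUSTERED INDEX IX_Part1_id ON #Part1 (id);

-- Then join the remaining tables to this small, well-estimated intermediate result
-- instead of to dbo.Tab1/dbo.Tab2 directly.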
It's hard to answer not knowing the size and structure of your tables, and not being able to see the entire execution plan. But the difference in both plans is Hash Match join for "top n" query vs Nested Loop join for the other one.
Hash Match is a very resource-intensive join, because the server has to prepare hash buckets in order to use it. But it becomes much more effective for big tables, while Nested Loops, comparing each row in one table to every row in another table, works great for small tables, because there's no such preparation needed.
What I think is that by selecting TOP 1000000000000 rows in the subquery you give the optimizer a hint that your subquery will produce a great amount of data, so it uses Hash Match. But in fact the output is small, so Nested Loops works better.
What I just said is based on shreds of information, so please have a heart when criticising my answer ;).
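To directly address the "can I use a HINT" part of the question: one query-level option that steers the optimizer towards hash joins is OPTION (HASH JOIN). It applies to every join in the statement, so it is a blunt instrument (a sketch based on the first query in the question, not a recommendation):
SELECT
MAX(AtDate1),
MIN(AtDate2)
FROM
(
SELECT
at.Date1 AS AtDate1,
at.Date2 AS AtDate2
FROM
dbo.tab1 a
INNER JOIN
dbo.tab2 at
ON
a.id = at.RootId
AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE
a.Number = 223889
)B
OPTION (HASH JOIN) -- restricts every join in this statement to a hash join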

ORA-22813 error with SQL complex query

I have a big SELECT statement which has many nested selects in it. When I run it, it gives me an ORA-22813 error:
ORA-22813: the collection value from one of the inner subqueries has exceeded the system limits, hence this error.
I have given below some of the nested selects which return huge data.
---The 1st select returns the most data.
Can I handle and process the huge data returned by the inner SELECTs in some alternate way (for example into tables) so that there is no out-of-memory or sort-size error? Or is there any other way to get the query to process successfully without error?
/*****************************************BEGIN
LEFT OUTER JOIN
( SELECT *
FROM STUDENT_COURSE stu_c
LEFT OUTER JOIN STUDENT_history ch on stu_c.course_id = ch.ch_course_id
LEFT OUTER JOIN STUDENT_master stu_mca on ch.course_history_id = stu_mca.item_id
) stu_c ON stu_c.HISTORY_ID = toa.ACTIVITY_ID ----->This table is joined earlier
LEFT OUTER JOIN
(SELECT c_e.EV_ID, c_e.EV_NAME, ma.item_id, ma.cata_id
FROM EVENTS c_e LEFT OUTER
JOIN COURSE_master ma on c_e.event_Id = ma.item_id ) c_e ON c_e.EVENT_ID = toa.ACTIVITY_ID
After these selects we have GROUP BYs to further sort.
I have checked that if I put an extra limiting qualification, such as WHERE ROWNUM < 30 or < 20, in each of these SELECTs, it works fine.
Full query
SELECT * FROM (SELECT
mcat.CATALOG_ITEM_ID,
mcat.CATALOG_ITEM_NAME ,
mcat.DESCRIPTION,
mcat.CATALOG_ITEM_TYPE,
mcat.DELIVERY_METHOD,
XMLElement("TRAINING_PLAN",XMLAttributes( TP.TPLAN_ID as "id" ),
XMLELEMENT("COMPLETE_QUANTITY", TP.COMPLETE_QUANTITY),
XMLELEMENT("COMPLETE_UNIT", TP.COMPLETE_UNIT),
XMLElement("TOTAL_CREDITS", TP.numberOfCredits ),
XMLELEMENT("IS_CREDIT_BASED", TP.IS_CREDIT_BASED),
XMLELEMENT("IS_FOR_CERT", TP.IS_FOR_CERT),
XMLELEMENT("ACCREDIT_ORG_NAME", TP.ACCRED_ORG_NAME),
XMLELEMENT("ACCREDIT_ORG_ID", TP.accredit_org_id ),
XMLElement("OBJECTIVE_LIST", TP.OBJECTIVE_LIST )
).extract('/').getClobVal() AS PLAN_LIST
FROM
student_master_catalog mcat
INNER JOIN
(SELECT stu_tp.TPLAN_ID,
stu_tp.COMPLETE_QUANTITY,
stu_tp.COMPLETE_UNIT,
stu_tp.TPLAN_XML_DATA.extract('//numberOfCredits/text()').getStringVal() as numberOfCredits,
stu_tp.IS_CREDIT_BASED,
stu_tp.IS_FOR_CERT,
stu_oa.ACCRED_ORG_NAME,
stu_tp.TPLAN_XML_DATA.extract('//accreditingOrg/text()').getStringVal() as accredit_org_id,
objective_list.OBJECTIVE_LIST
FROM
student_training_catalog stu_tp
LEFT OUTER JOIN
stu_accrediting_org stu_oa on stu_tp.TPLAN_XML_DATA.extract('//accreditingOrg/text()').getStringVal() = stu_oa.ACCRED_ORG_ID
INNER JOIN
(SELECT *
FROM
(SELECT
stu_tpo.TPLAN_ID AS OBJECTIVE_TPLAN_ID,
XMLAgg(
XMLElement("OBJECTIVE",
XMLElement("OBJECTIVE_ID",stu_tpo.T_OBJECTIVE_ID ),
XMLElement("OBJECTIVE_NAME",stu_to.T_OBJECTIVE_NAME ),
XMLElement("OBJECTIVE_REQUIRED_CREDITS_OR_ACTIVITIES",stu_tpo.REQUIRED_CREDITS ),
XMLElement("ITEM_ORDER", stu_tpo.ITEM_ORDER ),
XMLElement("ACTIVITY_LIST", activity_list.ACTIVITY_LIST )
)
) as OBJECTIVE_LIST
FROM
stu_TP_OBJECTIVE stu_tpo
INNER JOIN
stu_TRAINING_OBJECTIVE stu_to ON stu_tpo.T_OBJECTIVE_ID = stu_to.T_OBJECTIVE_ID
INNER JOIN
(SELECT *
FROM
(SELECT stu_toa.T_OBJECTIVE_ID AS ACTIVITY_TOBJ_ID, XMLAgg(
XMLElement("ACTIVITY",
XMLElement("ACTIVITY_ID",stu_toa.ACTIVITY_ID ),
XMLElement("CATALOG_ID",COALESCE(stu_c.CATALOG_ID, COALESCE( stu_e.CATALOG_ID, stu_t.CATALOG_ID ) ) ),
XMLElement("CATALOG_ITEM_ID",COALESCE(stu_c.CATALOG_ITEM_ID, COALESCE( stu_e.CATALOG_ITEM_ID, stu_t.CATALOG_ITEM_ID ) ) ),
XMLElement("DELIVERY_METHOD",COALESCE(stu_c.DELIVERY_METHOD, COALESCE( stu_e.DELIVERY_METHOD, stu_t.DELIVERY_METHOD ) ) ),
XMLElement("ACTIVITY_NAME",COALESCE(stu_c.COURSE_NAME, COALESCE( stu_e.EVENT_NAME, stu_t.TEST_NAME ) ) ),
XMLElement("ACTIVITY_TYPE",initcap( stu_toa.ACTIVITY_TYPE ) ),
XMLElement("IS_REQUIRED",stu_toa.IS_REQUIRED ),
XMLElement("IS_PREFERRED",stu_toa.IS_PREFERRED ),
XMLElement("NUMBER_OF_CREDITS",stu_lac.CREDIT_HOURS),
XMLElement("ITEM_ORDER", stu_toa.ITEM_ORDER )
)) as ACTIVITY_LIST
FROM stu_TRAIN_OBJ_ACTIVITY stu_toa
LEFT OUTER JOIN
(
SELECT distinct lac.LEARNING_ACTIVITY_ID, lac.CREDIT_HOURS
FROM student_training_catalog tp
INNER JOIN stu_TP_OBJECTIVE tpo on tp.TPLAN_ID = tpo.TPLAN_ID
INNER JOIN stu_TRAIN_OBJ_ACTIVITY toa on tpo.T_OBJECTIVE_ID = toa.T_OBJECTIVE_ID
INNER JOIN stu_LEARNINGACTIVITY_CREDITS lac on lac.LEARNING_ACTIVITY_ID = toa.ACTIVITY_ID and tp.TPLAN_XML_DATA.extract ('//accreditingOrg/text()').getStringVal() = lac.ACC_ORG_ID
where tp.tplan_id ='*************'
) stu_lac ON stu_lac.LEARNING_ACTIVITY_ID = stu_toa.ACTIVITY_ID ------>This Select returns correct no. of rows
I want to join the nested SELECTs below on stu_toa.ACTIVITY_ID; this would solve my issues.
The SELECT below, inside the LEFT OUTER JOIN, is the problem: it returns too much because 3 tables are joined directly without any value qualification.
LEFT OUTER JOIN
( SELECT ch.COURSE_HISTORY_ID, stu_c.COURSE_NAME, mca.catalog_item_id, mca.catalog_id, mca.delivery_method
FROM stu_COURSE stu_c
LEFT OUTER JOIN stu_course_history ch on stu_c.course_id = ch.ch_course_id
--If I can qualify here with ch.ch_course_id = stu_toa.ACTIVITY_ID (stu_toa.ACTIVITY_ID from the above select with correct no. of rows )
--Here, I get errors because I can't access outside values inside a left outer join
LEFT OUTER JOIN student_master_catalog mca on ch.course_history_id = mca.catalog_item_id
) stu_c ON stu_c.COURSE_HISTORY_ID = stu_toa.ACTIVITY_ID
LEFT OUTER JOIN
(SELECT stu_e.EVENT_ID, stu_e.EVENT_NAME, mca.catalog_item_id, mca.catalog_id, mca.delivery_method FROM stu_EVENTS stu_e LEFT OUTER JOIN student_master_catalog mca on stu_e.event_Id = mca.catalog_item_id ) stu_e ON stu_e.EVENT_ID = stu_toa.ACTIVITY_ID
LEFT OUTER JOIN
(SELECT stu_t.TEST_HISTORY_ID, stu_t.TEST_NAME, mca.catalog_item_id, mca.catalog_id, mca.delivery_method FROM stu_TEST_HISTORY stu_t LEFT OUTER JOIN student_master_catalog mca on stu_t.test_history_id = mca.catalog_item_id) stu_t ON stu_t.test_history_id = stu_toa.ACTIVITY_ID
GROUP BY stu_toa.T_OBJECTIVE_ID) ) activity_list ON activity_list.ACTIVITY_TOBJ_ID = stu_tpo.T_OBJECTIVE_ID
GROUP BY stu_tpo.TPLAN_ID) ) objective_list ON objective_list.OBJECTIVE_TPLAN_ID = stu_tp.TPLAN_ID
)TP ON TP.TPLAN_ID = mcat.CATALOG_ITEM_ID
WHERE
mcat.CATALOG_ITEM_ID = '*****************' and mcat.CATALOG_ORG_ID = '********')
Please post the DDLs, approximate sizes (relative to each other), and the complete query, rather than just an excerpt.
Some quick hits that may or may not solve your problem (for better help, I need better information) --
Are you sure you mean OUTER join? Outer joining students to courses means students who are not taking any courses will still be around. Is that the desired behaviour?
Don't select * if you only want a limited subset of the columns. Enumerate the exact columns you need. The rest might not seem like much on a row-by-row basis, but when you multiply by the total number of rows you have, this sort of thing can mean the difference between in-memory sorts and spilling to disk.
How many rows of data are you looking at? There are times when separate queries with programmatic aggregation can work better. Someone with more knowledge of Oracle query optimization may be able to help; also, tweaking the settings could help here too...
I've had instances where a sproc that aggregated data from more than one source took exponentially longer than making two calls from the app and putting the results together in memory.
Post DDL of your tables and exact plan of the query.
Meanwhile, try increasing pga_aggregate_target, sort_area_size and hash_area_size.
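For example (a sketch only; the sizes are placeholders, ALTER SYSTEM needs the appropriate privilege, and the two *_area_size parameters are only honoured when workarea_size_policy is MANUAL):
-- Instance-wide PGA target (assumes an spfile for SCOPE = BOTH)
ALTER SYSTEM SET pga_aggregate_target = 4G SCOPE = BOTH;

-- Session-level work areas
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;   -- 100 MB, placeholder value
ALTER SESSION SET hash_area_size = 104857600;   -- 100 MB, placeholder value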