Why does a separate table perform significantly better than a subquery? - sql

I was trying to improve the performance of a SQL query and tried a few combinations.
Original Query
SELECT ALIAS_A.id1,
ALIAS_A.id2,
ALIAS_B.columnA,
ALIAS_C.columnB,
ALIAS_B.columnC
FROM db_A.table_A ALIAS_A
LEFT OUTER JOIN db_A.table_B ALIAS_B
ON ALIAS_A.id2 = ALIAS_B.id2
LEFT OUTER JOIN db_B.table_C ALIAS_C
ON ALIAS_B.columnA = ALIAS_C.item_num
LEFT OUTER JOIN db_A.table_D ALIAS_D
ON ALIAS_A.id2 = ALIAS_D.id2
INNER JOIN db_C.table_E ALIAS_E
ON Cast(ALIAS_A.column_date AS DATE) BETWEEN
ALIAS_E.column_startdate AND ALIAS_E.column_enddate
WHERE ALIAS_E.fiscalyear >= 2016
AND Cast(ALIAS_A.columnD AS DATE) BETWEEN
CURRENT_DATE - 5 AND CURRENT_DATE
The above query consumes nearly 400k impactCPU
Optimized Query 1
SELECT New_sub_table.id1,
New_sub_table.id2,
ALIAS_B.columnA,
ALIAS_C.columnB,
ALIAS_B.columnC
--changed part start--
FROM ( sel * from db_A.table_A ALIAS_A WHERE Cast(ALIAS_A.columnD AS DATE) BETWEEN
CURRENT_DATE - 5 AND CURRENT_DATE ) New_sub_table -- created a subquery
--changed part end--
LEFT OUTER JOIN db_A.table_B ALIAS_B
ON New_sub_table.id2 = ALIAS_B.id2
LEFT OUTER JOIN db_B.table_C ALIAS_C
ON ALIAS_B.columnA = ALIAS_C.item_num
LEFT OUTER JOIN db_A.table_D ALIAS_D
ON New_sub_table.id2 = ALIAS_D.id2
INNER JOIN db_C.table_E ALIAS_E
ON Cast(New_sub_table.column_date AS DATE) BETWEEN
ALIAS_E.column_startdate AND ALIAS_E.column_enddate
WHERE ALIAS_E.fiscalyear >= 2016
My idea was to filter the data first and then do the joins. After that I checked the performance stats: it was consuming nearly 390k CPU. Not much of a difference.
Optimized Query 2
SELECT ALIAS_A.id1,
ALIAS_A.id2,
ALIAS_B.columnA,
ALIAS_C.columnB,
ALIAS_B.columnC
--changed part start--
FROM INTERMEDIATE_DB.INTERMEDIATE_TABLE ALIAS_A --CREATED AN INTERMEDIATE TABLE
--changed part end--
LEFT OUTER JOIN db_A.table_B ALIAS_B
ON ALIAS_A.id2 = ALIAS_B.id2
LEFT OUTER JOIN db_B.table_C ALIAS_C
ON ALIAS_B.columnA = ALIAS_C.item_num
LEFT OUTER JOIN db_A.table_D ALIAS_D
ON ALIAS_A.id2 = ALIAS_D.id2
INNER JOIN db_C.table_E ALIAS_E
ON Cast(ALIAS_A.column_date AS DATE) BETWEEN
ALIAS_E.column_startdate AND ALIAS_E.column_enddate
WHERE ALIAS_E.fiscalyear >= 2016
MACRO for loading data into intermediate table
INSERT INTO INTERMEDIATE_DB.INTERMEDIATE_TABLE
sel * from db_A.table_A ALIAS_A WHERE Cast(ALIAS_A.columnD AS DATE) BETWEEN
CURRENT_DATE - 5 AND CURRENT_DATE
So what I did here was use an intermediate table instead of a subquery. The intermediate table gets loaded via the macro first, and then the select query runs. It now consumes only 50k impactCPU (for both the macro and the select query combined).
My question -
I am unable to work out why this is happening even though the logic behind both queries is the same (or so I think). What would be the best practice if this is the incorrect way?

Your main problem is the Cast(ALIAS_A.columnD AS DATE). When you check the Explain you will notice the optimizer has no confidence for this step, probably greatly overestimating the number of rows returned.
But when you materialize the Select, the number of rows is better known and the order of joins changes.
You would probably get the same plan if you Collect Statistics on Cast(ALIAS_A.columnD AS DATE). Run DIAGNOSTIC HELPSTATS ON FOR SESSION; and the Explain should show you this expression as recommended stats.
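A rough sketch of what that could look like, assuming a Teradata release that supports statistics on expressions (the table and column names are taken from the question; the statistics name is arbitrary):
-- Have Explain list recommended statistics for this session
DIAGNOSTIC HELPSTATS ON FOR SESSION;

-- Collect statistics on the CAST expression used in the date filter,
-- so the optimizer can estimate the rows kept by the BETWEEN predicate
COLLECT STATISTICS
       COLUMN (Cast(columnD AS DATE)) AS stats_columnD_as_date
ON db_A.table_A;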

Related

Converting a SQL subquery to a join for performance gains

I have a subquery with an inner join. This join is meant to cut the data down to a more manageable size before extracting data via an unpivot, which has a further join to only pull out relevant matches.
When I've looked at the execution plan, it seems like the outer select is being executed first and is thus taking an inordinate amount of time to complete, as it is processing the data for all gamers instead of the cut-down cohort.
This is the query:
SELECT
t2.Gamer_ID,
C.Feature_Code,
C.Feature_Name,
t2.CODE_DATE
FROM
(
SELECT
A.Gamer_ID
,[Identification_Code_Code_1]
,[Identification_Code_Code_2]
,[Identification_Code_Code_3]
,[Identification_Code_Code_4]
,[Identification_Code_Code_5]
,[Identification_Code_Code_6]
,[Identification_Code_Code_7]
,[Identification_Code_Code_8]
,[Identification_Code_Code_9]
,[Identification_Code_Code_10]
,CAST(Joining_date AS DATE) AS CODE_DATE
FROM Gamer_Characteristics A
INNER JOIN Gamer_Population P ON P.Gamer_ID = A.Gamer_ID --cuts down the number of gamers to the selected cohort
) s
unpivot (CODE for col in (
[Identification_Code_Code_1]
,[Identification_Code_Code_2]
,[Identification_Code_Code_3]
,[Identification_Code_Code_4]
,[Identification_Code_Code_5]
,[Identification_Code_Code_6]
,[Identification_Code_Code_7]
,[Identification_Code_Code_8]
,[Identification_Code_Code_9]
,[Identification_Code_Code_10])) as t2
INNER JOIN Gamer_feature_Code C ON C.CODE = LEFT(t2.CODE,C.CODE_LENGTH) --join to a dimension table to pull through characteristics based on code and code length
WHERE
T2.CODE_DATE <= '2020-03-31'
GROUP BY t2.Gamer_ID,
C.Feature_Code,
C.Feature_Name,
t2.CODE_DATE
I have two questions.
1: Can this be converted to use a join instead of a subquery?
2: Can I force the inner join in the subquery to take precedence over the inner join in the outer select?

Google Bigquery says "Response too large to return" with simple select

The allowLargeResults modifier is set, and I have also tried both interactive and batch query priority.
There are 70M records in the search_results table, 10M records in searches and only about 900 in the buy table. The WHERE clause also reduces the number of rows pretty well.
SELECT
s.flyFrom, s.to, s.typeFlight, r.price, b.price, b.affily
FROM [sptest.buy] AS b
INNER JOIN [sptest.search_results] AS r
ON b.booking_token=r.booking_token
INNER JOIN [sptest.searches] AS s
ON s.searchid=r.searchid
WHERE
DATE(r.saved_at) >= DATE('2015-06-23 00:00:00') AND
DATE(s.saved_at) >= DATE('2015-06-23 00:00:00')
LIMIT 10
Could the problem be caused by large joining keys? The booking_token key is variable size 50-600 chars.
I would make a couple of modifications to this query:
Move WHERE clause filters closer to the table scan
Use JOIN EACH construct
SELECT
s.flyFrom, s.to, s.typeFlight, r.price, b.price, b.affily
FROM [sptest.buy] AS b
INNER JOIN EACH
(SELECT * FROM [sptest.search_results] WHERE saved_at > DATE('2015-06-23 00:00:00')) AS r
ON b.booking_token=r.booking_token
INNER JOIN EACH
(SELECT * FROM [sptest.searches] WHERE saved_at > DATE('2015-06-23 00:00:00')) AS s
ON s.searchid=r.searchid
LIMIT 10

SQL query that uses a GROUP BY and IN is too slow

I am struggling to speed this SQL query up. I have tried removing all the fields besides the two SUM() functions and the Id field, but it is still incredibly slow; it is currently taking 15 seconds to run. Does anyone have any suggestions to speed this up, as it is currently causing a timeout on a page in my web app? I need the fields shown so I can't really remove them, but there surely has to be a way to improve this.
SELECT [Customer].[iCustomerID],
[Customer].[sCustomerSageCode],
[Customer].[sCustomerName],
[Customer].[sCustomerTelNo1],
SUM([InvoiceItem].[fQtyOrdered]) AS [Quantity],
SUM([InvoiceItem].[fNetAmount]) AS [Value]
FROM [dbo].[Customer]
LEFT JOIN [dbo].[CustomerAccountStatus] ON ([Customer].[iAccountStatusID] = [CustomerAccountStatus].[iAccountStatusID])
LEFT JOIN [dbo].[SalesOrder] ON ([SalesOrder].[iCustomerID] = [dbo].[Customer].[iCustomerID])
LEFT JOIN [Invoice] ON ([Invoice].[iCustomerID] = [Customer].[iCustomerID])
LEFT JOIN [dbo].[InvoiceItem] ON ([Invoice].[iInvoiceNumber] = [InvoiceItem].[iInvoiceNumber])
WHERE ([InvoiceItem].[sNominalCode] IN ('4000', '4001', '4002', '4004', '4005', '4006', '4007', '4010', '4015', '4016', '700000', '701001', '701002', '701003'))
AND( ([dbo].[SalesOrder].[dOrderDateTime] >= '2013-01-01')
OR ([dbo].[Customer].[dDateCreated] >= '2014-01-01'))
GROUP BY [Customer].[iCustomerID],[Customer].[sCustomerSageCode],[Customer].[sCustomerName], [Customer].[sCustomerTelNo1];
I don't think this query is doing what you want anyway. As written, there are no relationships between the Invoice table and the SalesOrder table. This leads me to believe that it is producing a Cartesian product between invoices and orders, so customers with lots of orders would be generating lots of unnecessary intermediate rows.
You can test this by removing the SalesOrder table from the query:
SELECT c.[iCustomerID], c.[sCustomerSageCode], c.[sCustomerName], c.[sCustomerTelNo1],
SUM(it.[fQtyOrdered]) AS [Quantity], SUM(it.[fNetAmount]) AS [Value]
FROM [dbo].[Customer] c LEFT JOIN
[dbo].[CustomerAccountStatus] cas
ON c.[iAccountStatusID] = cas.[iAccountStatusID] LEFT JOIN
[Invoice] i
ON (i.[iCustomerID] = c.[iCustomerID]) LEFT JOIN
[dbo].[InvoiceItem] it
ON (i.[iInvoiceNumber] = it.[iInvoiceNumber])
WHERE it.[sNominalCode] IN ('4000', '4001', '4002', '4004', '4005', '4006', '4007', '4010', '4015', '4016', '700000', '701001', '701002', '701003') AND
c.[dDateCreated] >= '2014-01-01'
GROUP BY c.[iCustomerID], c.[sCustomerSageCode], c.[sCustomerName], c.[sCustomerTelNo1];
If this works and you need the SalesOrder, then you will need to either pre-aggregate by SalesOrder or find better join keys.
The above query could benefit from an index on Customer(dDateCreated, CustomerId).
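For reference, a sketch of that index (assuming SQL Server syntax and the iCustomerID column used in the query; the index name is made up):
CREATE NONCLUSTERED INDEX IX_Customer_dDateCreated_iCustomerID
ON dbo.Customer (dDateCreated, iCustomerID);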
You have a lot of LEFT JOINs.
I don't see CustomerAccountStatus being used; you can exclude it.
The [InvoiceItem].[sNominalCode] could be NULL because of the LEFT JOIN, so add [InvoiceItem].[sNominalCode] IS NOT NULL OR <THE IN CONDITION>.
Also add the IS NOT NULL checks to the other conditions.
It seems you are looking for customers that are either created this year or for which sales orders exist from last year or this year. So select from customers, but use EXISTS on SalesOrder. Then you want to count invoices. So outer join them and make sure to have the criteria in the ON clause. (sNominalCode will be NULL for any outer joined records. Hence asking for certain sNominalCode in the WHERE clause will turn your outer join into an inner join.)
SELECT
c.iCustomerID,
c.sCustomerSageCode,
c.sCustomerName,
c.sCustomerTelNo1,
SUM(ii.fQtyOrdered) AS Quantity,
SUM(ii.fNetAmount) AS Value
FROM dbo.Customer c
LEFT JOIN dbo.Invoice i ON (i.iCustomerID = c.iCustomerID)
LEFT JOIN dbo.InvoiceItem ii ON (ii.iInvoiceNumber = i.iInvoiceNumber AND ii.sNominalCode IN ('4000', '4001', '4002', '4004', '4005', '4006', '4007', '4010', '4015', '4016', '700000', '701001', '701002', '701003'))
WHERE c.dDateCreated >= '2014-01-01'
OR EXISTS
(
SELECT *
FROM dbo.SalesOrder
WHERE iCustomerID = c.iCustomerID
AND dOrderDateTime >= '2013-01-01'
)
GROUP BY c.iCustomerID, c.sCustomerSageCode, c.sCustomerName, c.sCustomerTelNo1;

Improve Performance of SQL query joining 14 tables

I am trying to join 14 tables in which few tables I need to join using left join.
With the existing data, which is around 7,000 records, it's taking around 10 seconds to execute the query below. I am afraid of what will happen if there are more than a million records. Please help me improve the performance of the query below.
CREATE proc [dbo].[GetTodaysActualInvoiceItemSoldHistory]
@fromdate datetime,
@todate datetime
as
Begin
select SDID.InvoiceDate as [Sold Date],Cust.custCompanyName as [Sold To] ,
case SQBD.TransferNo when '0' then IVM.VendorName else SQBD.TransferNo end as [Purchase From],
SQBD.BatchSellQty as SoldQty,SQID.SellPrice,
SDID.InvoiceNo as [Sales Invoice No],INV.PRInvoiceNo as [PO Invoice No],INV.PRInvoiceDate as [PO Invoice Date],
SQID.ItemDesc as [Item Description],SQID.NetPrice,SDHM.DeliveryHeaderMasterName as DeliveryHeaderName,
SQID.ItemCode as [Item Code],
SQBD.BatchNo,SQBD.ExpiryDate,SQID.Amount,
SQID.Dept_ID as Dept_ID,
Dept_Name as [Department],SQID.Catg_ID as Catg_ID,
Category_Name as [Category],SQID.Brand_ID as Brand_ID,
BrandName as BrandName, SQID.Manf_Id as Manf_Id,
Manf.ManfName as [Manufacturer],
STM.TaxName, SQID.Tax_ID as Tax_ID,
INV.VendorID as VendorID,
SQBD.ItemID,SQM.Isdeleted,
SDHM.DeliveryHeaderMasterID,Cust.CustomerMasterID
from SD_QuotationMaster SQM
inner join SD_InvoiceDetails SDID on SQM.QuoteID = SDID.QuoteID
inner join SD_QuoteItemDetails SQID on SDID.QuoteID = SQID.QuoteID
inner join SD_QuoteBatchDetails SQBD on SDID.QuoteID = SQBD.QuoteID and SQID.ItemID=SQBD.ItemID
inner join INV_ProductInvoice INV on SQBD.InvoiceID=INV.ProductInvoiceID
inner join INV_VendorMaster IVM ON INV.VendorID = IVM.VendorID
inner join Sys_TaxMaster STM ON SQID.Tax_ID = STM.Tax_ID
inner join Cust_CustomerMaster Cust on SQM.CustomerMasterID = Cust.CustomerMasterID
left join INV_DeptartmentMaster Dept ON SQID.Dept_ID = Dept.Dept_ID
left join INV_BrandMaster BRD ON SQID.Brand_ID = BRD.Brand_ID
left join INV_ManufacturerMaster Manf ON SQID.Manf_Id = Manf.Manf_Id
left join INV_CategoryMaster CAT ON SQID.Catg_ID = CAT.Catg_ID
left join SLRB_DeliveryCustomerMaster SDCM on SQM.CustomerMasterID=SDCM.CustomerMasterID and SQM.DeliveryHeaderMasterID=SDCM.DeliveryHeaderMasterID
left join SLRB_DeliveryHeaderMaster SDHM on SDCM.DeliveryHeaderMasterID=SDHM.DeliveryHeaderMasterID
where (SQM.IsDeleted=0) and SQBD.BatchSellQty > 0
and SDID.InvoiceDate between @fromdate and @todate
order by ItemDesc
End
Only the tables below contain more data, while the other tables have fewer than 20 records:
InvoiceDetails, QuoteMaster, QuoteItemDetails, QuoteBatchDetails, ProductInvoice
Below is the link for execution plan
http://jmp.sh/CSZc2x2
Thanks.
Let's start with an obvious error:
(isnull(SQBD.BatchSellQty,0) > 0)
That one is not indexable, so it should not happen. Seriously, BatchSellQty should not be unknown (nullable) in most cases, or you had better handle NULL properly. That field should be indexed, and I am not sure I would like that with an IsNull - there are likely tons of batches. Also note that a filtered index (condition > 0) may work here.
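A hedged sketch of such a filtered index, assuming SQL Server and the column names shown in the query (the key and included columns are only illustrative):
CREATE NONCLUSTERED INDEX IX_SQBD_BatchSellQty_Positive
ON dbo.SD_QuoteBatchDetails (QuoteID, ItemID)
INCLUDE (InvoiceID, BatchSellQty, BatchNo, ExpiryDate)
WHERE BatchSellQty > 0; -- filtered index: only rows with a positive quantity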
Second, check that you have all proper indices and the execution plan makes sense.
Third, you have to test with a ton of data. Index statistics may make a difference. Check where the time is spent - it may be tempdb, in which case you really need good tempdb IO speed, and that is not related to the input side.
You can try to use query hints to help the SQL Server optimizer build an optimal query execution plan. For example, you can force the order in which the tables will be joined, using the FORCE ORDER hint. If you order your tables so that each step's join produces the minimum result size, the query may execute faster (you need to try it). Example:
We need to A join B join C
If A join B = 2000 records x 1000 records = ~400 records (we suspect this result)
And A join C = 2000 records x 10 records = ~3 records (and this)
And B join C = 1000 records x 10 records = 10 000 records (and this)
In this case the optimal order will be
A join C join B = ~3 records x 1000 records = ~3000 records
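A minimal sketch of the hint itself, using the hypothetical tables A, B and C from the example above (the join columns are made up for illustration):
SELECT A.id, B.b_value, C.c_value
FROM A
INNER JOIN C ON C.a_id = A.id   -- smallest intermediate result first
INNER JOIN B ON B.a_id = A.id
OPTION (FORCE ORDER);           -- join in exactly the order the tables are written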

Query faster with top attribute

Why is this query faster in SQL Server 2008 R2 (Version 10.50.2806.0)
SELECT
MAX(AtDate1),
MIN(AtDate2)
FROM
(
SELECT TOP 1000000000000
at.Date1 AS AtDate1,
at.Date2 AS AtDate2
FROM
dbo.tab1 a
INNER JOIN
dbo.tab2 at
ON
a.id = at.RootId
AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE
a.Number = 223889
)B
than
SELECT
MAX(AtDate1),
MIN(AtDate2)
FROM
(
SELECT
at.Date1 AS AtDate1,
at.Date2 AS AtDate2
FROM
dbo.tab1 a
INNER JOIN
dbo.tab2 at
ON
a.id = at.RootId
AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE
a.Number = 223889
)B
?
The statement with the TOP attribute is six times faster.
The count(*) of the inner subquery is 9280 rows.
Can I use a HINT to make the SQL Server optimiser get it right?
I see you've now posted the plans. Just luck of the draw.
Your actual query is a 16 table join.
SELECT max(atDate1) AS AtDate1,
min(atDate2) AS AtDate2,
max(vtDate1) AS vtDate1,
min(vtDate2) AS vtDate2,
max(bgtDate1) AS bgtDate1,
min(bgtDate2) AS bgtDate2,
max(lftDate1) AS lftDate1,
min(lftDate2) AS lftDate2,
max(lgtDate1) AS lgtDate1,
min(lgtDate2) AS lgtDate2,
max(bltDate1) AS bltDate1,
min(bltDate2) AS bltDate2
FROM (SELECT TOP 100000 at.Date1 AS atDate1,
at.Date2 AS atDate2,
vt.Date1 AS vtDate1,
vt.Date2 AS vtDate2,
bgt.Date1 AS bgtDate1,
bgt.Date2 AS bgtDate2,
lft.Date1 AS lftDate1,
lft.Date2 AS lftDate2,
lgt.Date1 AS lgtDate1,
lgt.Date2 AS lgtDate2,
blt.Date1 AS bltDate1,
blt.Date2 AS bltDate2
FROM dbo.Tab1 a
INNER JOIN dbo.Tab2 at
ON a.id = at.Tab1Id
AND cast(Getdate() AS DATE) BETWEEN at.Date1 AND at.Date2
INNER JOIN dbo.Tab5 v
ON v.Tab1Id = a.Id
INNER JOIN dbo.Tab16 g
ON g.Tab5Id = v.Id
INNER JOIN dbo.Tab3 vt
ON v.id = vt.Tab5Id
AND cast(Getdate() AS DATE) BETWEEN vt.Date1 AND vt.Date2
LEFT OUTER JOIN dbo.Tab4 vk
ON v.id = vk.Tab5Id
LEFT OUTER JOIN dbo.VerkaufsTab3 vkt
ON vk.id = vkt.Tab4Id
LEFT OUTER JOIN dbo.Plu p
ON p.Tab4Id = vk.Id
LEFT OUTER JOIN dbo.Tab15 bg
ON bg.Tab5Id = v.Id
LEFT OUTER JOIN dbo.Tab7 bgt
ON bgt.Tab15Id = bg.Id
AND cast(Getdate() AS DATE) BETWEEN bgt.Date1 AND bgt.Date2
LEFT OUTER JOIN dbo.Tab11 b
ON b.Tab15Id = bg.Id
LEFT OUTER JOIN dbo.Tab14 lf
ON lf.Id = b.Id
LEFT OUTER JOIN dbo.Tab8 lft
ON lft.Tab14Id = lf.Id
AND cast(Getdate() AS DATE) BETWEEN lft.Date1 AND lft.Date2
LEFT OUTER JOIN dbo.Tab13 lg
ON lg.Id = b.Id
LEFT OUTER JOIN dbo.Tab9 lgt
ON lgt.Tab13Id = lg.Id
AND cast(Getdate() AS DATE) BETWEEN lgt.Date1 AND lgt.Date2
LEFT OUTER JOIN dbo.Tab10 bl
ON bl.Tab11Id = b.Id
LEFT OUTER JOIN dbo.Tab6 blt
ON blt.Tab10Id = bl.Id
AND cast(Getdate() AS DATE) BETWEEN blt.Date1 AND blt.Date2
WHERE a.Nummer = 223889) B
On both the good and bad plans the Execution Plan shows "Reason for Early Termination of Statement Optimization" as "Time Out".
The two plans have slightly different join orders.
The only join in the plans not satisfied by an index seek is that on Tab9. This has 63,926 rows.
The missing index details in the execution plan suggest that you create the following index.
CREATE NONCLUSTERED INDEX [missing_index]
ON [dbo].[Tab9] ([Date1],[Date2])
INCLUDE ([Tab13Id])
The problematic part of the bad plan can be clearly seen in SQL Sentry Plan Explorer
SQL Server estimates that 1.349174 rows will be returned from the previous joins coming into the join on Tab9. And therefore costs the nested loops join as if it will need to execute the scan on the inside table 1.349174 times.
In fact 2,600 rows feed into that join, meaning that it does 2,600 full scans of Tab9 (2,600 * 63,926 = 166,207,600 rows).
It just so happens that on the good plan the estimated number of rows coming into the join is 2.74319. This is still wrong by three orders of magnitude, but the slightly increased estimate means SQL Server favors a hash join instead. A hash join just does one pass through Tab9.
I would first try adding the missing index on Tab9.
Also/instead you might try updating the statistics on all tables involved (especially those with a date predicate such as Tab2 Tab3 Tab7 Tab8 Tab6) and see if that goes some way to correcting the huge discrepancy between estimated and actual rows on the left of the plan.
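For example, a minimal sketch for the tables with date predicates mentioned above (SQL Server syntax; FULLSCAN is optional but gives the most accurate histograms):
UPDATE STATISTICS dbo.Tab2 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab3 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab6 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab7 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab8 WITH FULLSCAN;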
Also breaking the query up into smaller parts and materialising these into temporary tables with appropriate indexes might help. SQL Server can then use the statistics on these partial results to make better decisions for joins later in the plan.
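For instance, a sketch of materialising the most selective part first (table and column names are taken from the query above; the temp table and index names are made up):
SELECT a.id, at.Date1 AS atDate1, at.Date2 AS atDate2
INTO #FilteredTab1
FROM dbo.Tab1 a
INNER JOIN dbo.Tab2 at
    ON a.id = at.Tab1Id
   AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE a.Nummer = 223889;

CREATE CLUSTERED INDEX IX_FilteredTab1_id ON #FilteredTab1 (id);
-- then join #FilteredTab1 to the remaining tables instead of Tab1/Tab2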
Only as a last resort would I consider using query hints to try and force the plan with a hash join. Your options for doing that are either the USE PLAN hint, in which case you dictate exactly the plan you want including all join types and orders, or stating LEFT OUTER HASH JOIN tab9 .... This second option also has the side effect of fixing all join orders in the plan. Both mean that SQL Server will be severely limited in its ability to adjust the plan with changes in data distribution.
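A sketch of the inline join-hint variant, reduced to just the Tab9 join (table and column names are from the query above; in the full query the hint would also freeze the overall join order, as noted):
SELECT lg.Id, lgt.Date1, lgt.Date2
FROM dbo.Tab13 lg
LEFT OUTER HASH JOIN dbo.Tab9 lgt
    ON lgt.Tab13Id = lg.Id
   AND CAST(GETDATE() AS DATE) BETWEEN lgt.Date1 AND lgt.Date2;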
It's hard to answer not knowing the size and structure of your tables, and not being able to see the entire execution plan. But the difference in both plans is Hash Match join for "top n" query vs Nested Loop join for the other one.
Hash Match is a very resource-intensive join, because the server has to prepare hash buckets in order to use it. But it becomes much more effective for big tables, while Nested Loops, which compares each row in one table to every row in another table, works great for small tables, because no such preparation is needed.
What I think is that by selecting TOP 1000000000000 rows in the subquery you give the optimizer a hint that your subquery will produce a great amount of data, so it uses Hash Match. But in fact the output is small, so Nested Loops works better.
What I just said is based on shreds of information, so please have a heart when criticising my answer ;).