SQL view with WHERE clause runs slower than a raw query - sql

This runs in constant time:
SELECT row_number() OVER (order by PackagingUniqueId) as RowNum, Barcode, pu.PackagingUniqueId,
rd.Name, pu.ComponentBarcode, rrl.ponum, rrl.mfgpart, rrl.new_lot_code, rrl.pno
FROM Trace.dbo.TraceData td
INNER JOIN Trace.dbo.TraceJob tj ON td.Id = tj.TraceDataId
INNER JOIN Trace.dbo.Job j ON tj.JobId = j.Id
INNER JOIN Trace.dbo.[Order] o ON j.OrderId = o.id
INNER JOIN Trace.dbo.PCBBarcode p ON td.PCBBarcodeId = p.Id
INNER JOIN Trace.dbo.TracePlacement tp ON td.Id = tp.TraceDataId
INNER JOIN Trace.dbo.Placement p2 ON p2.PlacementGroupId = tp.PlacementGroupId
INNER JOIN Trace.dbo.Charge c ON p2.ChargeId = c.Id
INNER JOIN Trace.dbo.PackagingUnit pu ON c.PackagingUnitId = pu.Id
INNER JOIN Trace.dbo.RefDesignator rd ON p2.RefDesignatorId = rd.Id
INNER JOIN SpotlightSQL.spot_light_dbo.peel_off_ids po ON po.peel_off_id = pu.PackagingUniqueId
INNER JOIN SpotlightSQL.spot_light_dbo.recv_receipts_log rrl ON rrl.label_id = po.label_id
WHERE p.Barcode = '20092619153'
However, this one takes about 7 seconds:
SELECT * FROM Component WHERE Barcode = '20092619153'
Component is a SQL view consisting of the first, longer query without the WHERE clause.
Why does this happen? Does the view retrieve all the records and then apply the WHERE clause? Is there a way to speed up the second query (without adding indexes)?

Why does this happen? Does the view retrieve all the records and then apply the WHERE clause?
Yes, in this particular case, SQL Server first executes the underlying query in full and then applies the WHERE filter to that intermediate result. Normally SQL Server expands a view inline and pushes the filter down to the base tables, but the row_number() window function blocks that here: filtering on Barcode before the rows are numbered would change the row numbers, so the predicate cannot be pushed below the window function.
Is there a way to speed up the second query (without adding indexes)?
A SQL view generally performs as well as its underlying query. So if Barcode is selective enough to filter out most records, adding an index on Barcode is the way to go. Beyond that, there is not much you can do to speed up the view itself.
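For reference, a minimal sketch of such an index; the table and column come from the query above, but the index name and exact shape are my assumptions:
-- Sketch: index the filtered column so the WHERE on Barcode becomes a seek
-- (index name and placement are illustrative assumptions)
CREATE NONCLUSTERED INDEX IX_PCBBarcode_Barcode
ON Trace.dbo.PCBBarcode (Barcode);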
One option would be to create a materialized view (in SQL Server terms, an indexed view), which is basically just a table whose data is generated by your view's query. Selecting all records from a materialized view, with no additional restrictions, should run at a speed limited only by the data transfer time.
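A minimal sketch of the mechanics, using a deliberately simplified two-table core; the real Component view would need reworking first, since indexed views require SCHEMABINDING and cannot contain window functions such as row_number() or cross-database references:
-- Sketch: run inside the Trace database; view and index names are illustrative
CREATE VIEW dbo.ComponentCore
WITH SCHEMABINDING
AS
SELECT td.Id AS TraceDataId, p.Barcode
FROM dbo.TraceData td
INNER JOIN dbo.PCBBarcode p ON td.PCBBarcodeId = p.Id;
GO
-- The unique clustered index is what materializes the view on disk
CREATE UNIQUE CLUSTERED INDEX IX_ComponentCore
ON dbo.ComponentCore (TraceDataId);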

Related

How to improve SQL inner join performance?

How to improve this query's performance? The inner join on the second table, CustomerAccountBrand, is taking a long time
The join takes about 4:30 minutes. I have added a nonclustered index, but it is not used. Should I split this into two inner joins and then concatenate the results afterwards? Can anyone help me get this data?
SELECT DISTINCT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode,
CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA -- Views
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
INNER JOIN dbo.CustomerAccountBrand CAB ---- Taking long time 4:30 mins
ON CA.AccountNumber = CAB.AccountNumber
AND CA.ShipTo = CAB.ShipTo
AND CA.SystemCode = CAB.SystemCode
ALTER VIEW [dbo].[CustomerAccountRelatedAccounts]
AS
SELECT
ca.AccountNumber, ca.ShipTo, ca.SystemCode, cafg.AccountNumber AS RelatedAccountNumber, cafg.ShipTo AS RelatedShipTo,
cafg.SystemCode AS RelatedSystemCode
FROM dbo.CustomerAccount AS ca
LEFT OUTER JOIN dbo.CustomerAccount AS cafg
ON ca.FinancialGroup = cafg.FinancialGroup
AND ca.NationalAccount = cafg.NationalAccount
AND cafg.IsActive = 1
WHERE CA.IsActive = 1
In my experience, the SQL Server query optimizer often fails to pick the correct join algorithm when queries become more complex (e.g. joining with your view means that there's no index readily available to join on). If that's what's happening here, the easy fix is to add a join hint to turn it into a hash join:
SELECT DISTINCT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode,
CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA -- Views
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
INNER HASH JOIN dbo.CustomerAccountBrand CAB ---- Note the "HASH" keyword
ON CA.AccountNumber = CAB.AccountNumber
AND CA.ShipTo = CAB.ShipTo
AND CA.SystemCode = CAB.SystemCode
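Note that a join hint in the FROM clause also forces the join order for the entire query. A lighter-touch alternative (my suggestion, not part of the original answer) is the query-level OPTION (HASH JOIN) hint, which restricts every join to a hash join while leaving the optimizer free to pick the join order:
SELECT DISTINCT
RA.AccountNumber,
RA.ShipTo,
RA.SystemCode,
CAB.BrandCode
FROM dbo.CustomerAccountRelatedAccounts RA
INNER JOIN dbo.CustomerAccount CA
ON RA.RelatedAccountNumber = CA.AccountNumber
AND RA.RelatedShipTo = CA.ShipTo
AND RA.RelatedSystemCode = CA.SystemCode
INNER JOIN dbo.CustomerAccountBrand CAB
ON CA.AccountNumber = CAB.AccountNumber
AND CA.ShipTo = CAB.ShipTo
AND CA.SystemCode = CAB.SystemCode
OPTION (HASH JOIN); -- all joins become hash joins; join order stays free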

SQL - faster to filter by large table or small table

I have the below query which takes a while to run, since ir_sales_summary is ~ 2 billion rows:
select c.ChainIdentifier, s.SupplierIdentifier, s.SupplierName, we.Weekend,
sum(sales_units_cy) as TY_unitSales, sum(sales_cost_cy) as TY_costDollars, sum(sales_units_ret_cy) as TY_retailDollars,
sum(sales_units_ly) as LY_unitSales, sum(sales_cost_ly) as LY_costDollars, sum(sales_units_ret_ly) as LY_retailDollars
from ir_sales_summary i
left join Chains c
on c.ChainID = i.ChainID
inner join Suppliers s
on s.SupplierID = i.SupplierID
inner join tmpWeekend we
on we.SaleDate = i.saledate
where year(i.saledate) = '2017'
group by c.ChainIdentifier, s.SupplierIdentifier, s.SupplierName, we.Weekend
(Worth noting, it takes roughly 3 hours to run since it is using a view that brings in data from a legacy service)
I'm thinking there's a way to speed up the filtering, since I just need the data from 2017. Should I be filtering from the big table (i) or be filtering from the much smaller weekending table (which gives us just the week ending dates)?
Try this; it might help. I believe that putting the static table first in the query and joining the fact/dynamic table onto it can affect query performance.
SELECT c.ChainIdentifier
,s.SupplierIdentifier
,s.SupplierName
,i.Weekend
,sum(sales_units_cy) AS TY_unitSales
,sum(sales_cost_cy) AS TY_costDollars
,sum(sales_units_ret_cy) AS TY_retailDollars
,sum(sales_units_ly) AS LY_unitSales
,sum(sales_cost_ly) AS LY_costDollars
,sum(sales_units_ret_ly) AS LY_retailDollars
FROM Suppliers s
INNER JOIN (
SELECT we.Weekend
,supplierid
,chainid
,sales_units_cy
,sales_cost_cy
,sales_units_ret_cy
,sales_units_ly
,sales_cost_ly
,sales_units_ret_ly
FROM ir_sales_summary i
INNER JOIN tmpWeekend we
ON we.SaleDate = i.saledate
WHERE year(i.saledate) = '2017'
) i
ON s.SupplierID = i.SupplierID
INNER JOIN Chains c
ON c.ChainID = i.ChainID
GROUP BY c.ChainIdentifier
,s.SupplierIdentifier
,s.SupplierName
,i.Weekend
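One further tweak worth trying (my addition, not something the answer above covers): year(i.saledate) = '2017' evaluates a function for every row, so an index on saledate cannot be used for a seek. A sargable date range avoids that:
-- Sketch: sargable replacement for year(i.saledate) = '2017'; with an
-- index on ir_sales_summary(saledate) this becomes a range seek
WHERE i.saledate >= '20170101'
AND i.saledate < '20180101'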

Why does my SELECT return only a subset of the rows?

I'm working with an Oracle database for the first time and have stumbled upon another problem. When I select all the rows in a table with some JOINs, I only get the first 350 of about 15,000 rows.
Anyone know if there is a set limit somewhere I'm not aware of?
Below is my query if needed:
SELECT orders.plant, orders.workcenter, workcenters.occupied,
workcenters.section, workcentersections.section, orders.capacitycat,
orders.week, orders.earlieststartdate, orders.lateststartdate,
orders.useropstatus, orders.programstatus, orders.reqhours,
orders.finishdate, orders.reqquantity, orders.material, parts.TYPE,
parttypes.TYPE, orders.ordernumber, orders.operation,
orders.preoperation, orders.seqoperation, orders.projectcode,
orders.queuetime, orders.hoursworked, orders.operationtext,
orders.shorttext
FROM (((orders INNER JOIN workcenters ON orders.workcenter =
workcenters.code)
INNER JOIN
workcentersections ON workcenters.section = workcentersections.ID)
INNER JOIN
parts ON orders.material = parts.material)
INNER JOIN
parttypes ON parts.TYPE = parttypes.ID
Assuming your ORDERS table contains 15000 rows and your original query returns only 350 rows, you can replace your (INNER) JOINs with OUTER JOINs:
SELECT orders.plant, orders.workcenter, workcenters.occupied,
workcenters.section, workcentersections.section, orders.capacitycat,
orders.week, orders.earlieststartdate, orders.lateststartdate,
orders.useropstatus, orders.programstatus, orders.reqhours,
orders.finishdate, orders.reqquantity, orders.material, parts.TYPE,
parttypes.TYPE, orders.ordernumber, orders.operation,
orders.preoperation, orders.seqoperation, orders.projectcode,
orders.queuetime, orders.hoursworked, orders.operationtext,
orders.shorttext
FROM orders
LEFT OUTER JOIN workcenters ON orders.workcenter = workcenters.code
LEFT OUTER JOIN workcentersections
ON workcenters.section = workcentersections.ID
LEFT OUTER JOIN parts ON orders.material = parts.material
LEFT OUTER JOIN parttypes ON parts.TYPE = parttypes.ID
This will give you all rows from ORDERS (you might get duplicates if you haven't got strict 1:N relationships).
Then, you should replace the LEFT OUTER JOINs one-by-one with INNER JOINs and check the row count of each of these modified queries to find out which of the JOINs is responsible for the missing data.
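Alternatively, once you suspect a particular join, a NOT EXISTS probe shows directly how many rows it drops. A hypothetical example against the parts join:
-- Sketch: count orders whose material has no match in parts;
-- a nonzero count means the inner join on parts is losing these rows
SELECT COUNT(*)
FROM orders o
WHERE NOT EXISTS (SELECT 1
                  FROM parts p
                  WHERE p.material = o.material);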

How to improve the performance of a SQL query even after adding indexes?

I am trying to execute the following SQL query, but it takes 22 seconds to run. The number of returned rows is 554,192. I need to make this faster, and I have already put indexes on all the tables involved.
SELECT mc.name AS MediaName,
lcc.name AS Country,
i.overridedate AS Date,
oi.rating,
bl1.firstname + ' ' + bl1.surname AS Byline,
b.id BatchNo,
i.numinbatch ItemNumberInBatch,
bah.changedatutc AS BatchDate,
pri.code AS IssueNo,
pri.name AS Issue,
lm.neptunemessageid AS MessageNo,
lmt.name AS MessageType,
bl2.firstname + ' ' + bl2.surname AS SourceFullName,
lst.name AS SourceTypeDesc
FROM profiles P
INNER JOIN profileresults PR
ON P.id = PR.profileid
INNER JOIN items i
ON PR.itemid = I.id
INNER JOIN batches b
ON b.id = i.batchid
INNER JOIN itemorganisations oi
ON i.id = oi.itemid
INNER JOIN lookup_mediachannels mc
ON i.mediachannelid = mc.id
LEFT OUTER JOIN lookup_cities lc
ON lc.id = mc.cityid
LEFT OUTER JOIN lookup_countries lcc
ON lcc.id = mc.countryid
LEFT OUTER JOIN itembylines ib
ON ib.itemid = i.id
LEFT OUTER JOIN bylines bl1
ON bl1.id = ib.bylineid
LEFT OUTER JOIN batchactionhistory bah
ON b.id = bah.batchid
INNER JOIN itemorganisationissues ioi
ON ioi.itemorganisationid = oi.id
INNER JOIN projectissues pri
ON pri.id = ioi.issueid
LEFT OUTER JOIN itemorganisationmessages iom
ON iom.itemorganisationid = oi.id
LEFT OUTER JOIN lookup_messages lm
ON iom.messageid = lm.id
LEFT OUTER JOIN lookup_messagetypes lmt
ON lmt.id = lm.messagetypeid
LEFT OUTER JOIN itemorganisationsources ios
ON ios.itemorganisationid = oi.id
LEFT OUTER JOIN bylines bl2
ON bl2.id = ios.bylineid
LEFT OUTER JOIN lookup_sourcetypes lst
ON lst.id = ios.sourcetypeid
WHERE p.id = #profileID
AND b.statusid IN ( 6, 7 )
AND bah.batchactionid = 6
AND i.statusid = 2
AND i.isrelevant = 1
When looking at the execution plan, I can see a step that accounts for 42% of the cost. Is there any way to bring this cost down, or otherwise improve the performance of the whole query?
Remove the profiles table, as it is not needed, and change the WHERE clause to:
WHERE PR.profileid = #profileID
You have a LEFT OUTER JOIN on the batchactionhistory table, but a condition in your WHERE clause turns it back into an inner join. Change your code to this:
LEFT OUTER JOIN batchactionhistory bah
ON b.id = bah.batchid
AND bah.batchactionid = 6
You don't need the batches table: it is only used to join other tables (which could be joined directly) and to show an id in your SELECT that is also available in other tables. Make the following changes:
i.batchid AS BatchNo,
LEFT OUTER JOIN batchactionhistory bah
ON i.batchid = bah.batchid
Are any of the fields used in the joins or the WHERE clause from tables that contain large amounts of data but are not indexed? If so, try adding indexes one at a time, starting with the largest table.
Do you need every field in the result? If you could lose one or two, you might be able to reduce the number of tables further.
First, if this is not a stored procedure, make it one. That's a lot of text for SQL Server to compile.
Next, my experience is that "worst practices" are occasionally a good idea. Specifically, I have been able to improve performance by splitting large queries into a couple or three small ones and assembling the results.
If this query is associated with a .NET, ColdFusion, Java, etc. application, you might be able to do the split/re-assemble in your application code. If not, a temporary table might come in handy, as sketched below.
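A minimal sketch of that split, assuming (my assumption) that the profile and item filters are the selective part worth materializing first:
-- Sketch: materialize the selective core into a temp table, then join the
-- lookup tables against it; column choices are illustrative
SELECT i.id, i.batchid, i.overridedate, i.numinbatch, i.mediachannelid
INTO #relevantitems
FROM profileresults pr
INNER JOIN items i
ON pr.itemid = i.id
WHERE pr.profileid = #profileID -- as in the original query
AND i.statusid = 2
AND i.isrelevant = 1;

CREATE CLUSTERED INDEX ix_relevantitems ON #relevantitems (id);

-- ...then run the remaining joins against #relevantitems.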

Query faster with top attribute

Why is this query faster in SQL Server 2008 R2 (Version 10.50.2806.0)
SELECT
MAX(AtDate1),
MIN(AtDate2)
FROM
(
SELECT TOP 1000000000000
at.Date1 AS AtDate1,
at.Date2 AS AtDate2
FROM
dbo.tab1 a
INNER JOIN
dbo.tab2 at
ON
a.id = at.RootId
AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE
a.Number = 223889
)B
than
SELECT
MAX(AtDate1),
MIN(AtDate2)
FROM
(
SELECT
at.Date1 AS AtDate1,
at.Date2 AS AtDate2
FROM
dbo.tab1 a
INNER JOIN
dbo.tab2 at
ON
a.id = at.RootId
AND CAST(GETDATE() AS DATE) BETWEEN at.Date1 AND at.Date2
WHERE
a.Number = 223889
)B
?
The statement with the TOP attribute is six times faster.
The count(*) of the inner subquery is 9,280 rows.
Can I use a hint to make the SQL Server optimizer get this right?
I see you've now posted the plans. Just luck of the draw.
Your actual query is a 16-table join.
SELECT max(atDate1) AS AtDate1,
min(atDate2) AS AtDate2,
max(vtDate1) AS vtDate1,
min(vtDate2) AS vtDate2,
max(bgtDate1) AS bgtDate1,
min(bgtDate2) AS bgtDate2,
max(lftDate1) AS lftDate1,
min(lftDate2) AS lftDate2,
max(lgtDate1) AS lgtDate1,
min(lgtDate2) AS lgtDate2,
max(bltDate1) AS bltDate1,
min(bltDate2) AS bltDate2
FROM (SELECT TOP 100000 at.Date1 AS atDate1,
at.Date2 AS atDate2,
vt.Date1 AS vtDate1,
vt.Date2 AS vtDate2,
bgt.Date1 AS bgtDate1,
bgt.Date2 AS bgtDate2,
lft.Date1 AS lftDate1,
lft.Date2 AS lftDate2,
lgt.Date1 AS lgtDate1,
lgt.Date2 AS lgtDate2,
blt.Date1 AS bltDate1,
blt.Date2 AS bltDate2
FROM dbo.Tab1 a
INNER JOIN dbo.Tab2 at
ON a.id = at.Tab1Id
AND cast(Getdate() AS DATE) BETWEEN at.Date1 AND at.Date2
INNER JOIN dbo.Tab5 v
ON v.Tab1Id = a.Id
INNER JOIN dbo.Tab16 g
ON g.Tab5Id = v.Id
INNER JOIN dbo.Tab3 vt
ON v.id = vt.Tab5Id
AND cast(Getdate() AS DATE) BETWEEN vt.Date1 AND vt.Date2
LEFT OUTER JOIN dbo.Tab4 vk
ON v.id = vk.Tab5Id
LEFT OUTER JOIN dbo.VerkaufsTab3 vkt
ON vk.id = vkt.Tab4Id
LEFT OUTER JOIN dbo.Plu p
ON p.Tab4Id = vk.Id
LEFT OUTER JOIN dbo.Tab15 bg
ON bg.Tab5Id = v.Id
LEFT OUTER JOIN dbo.Tab7 bgt
ON bgt.Tab15Id = bg.Id
AND cast(Getdate() AS DATE) BETWEEN bgt.Date1 AND bgt.Date2
LEFT OUTER JOIN dbo.Tab11 b
ON b.Tab15Id = bg.Id
LEFT OUTER JOIN dbo.Tab14 lf
ON lf.Id = b.Id
LEFT OUTER JOIN dbo.Tab8 lft
ON lft.Tab14Id = lf.Id
AND cast(Getdate() AS DATE) BETWEEN lft.Date1 AND lft.Date2
LEFT OUTER JOIN dbo.Tab13 lg
ON lg.Id = b.Id
LEFT OUTER JOIN dbo.Tab9 lgt
ON lgt.Tab13Id = lg.Id
AND cast(Getdate() AS DATE) BETWEEN lgt.Date1 AND lgt.Date2
LEFT OUTER JOIN dbo.Tab10 bl
ON bl.Tab11Id = b.Id
LEFT OUTER JOIN dbo.Tab6 blt
ON blt.Tab10Id = bl.Id
AND cast(Getdate() AS DATE) BETWEEN blt.Date1 AND blt.Date2
WHERE a.Nummer = 223889) B
On both the good and bad plans the Execution Plan shows "Reason for Early Termination of Statement Optimization" as "Time Out".
The two plans have slightly different join orders.
The only join in the plans not satisfied by an index seek is that on Tab9. This has 63,926 rows.
The missing index details in the execution plan suggest that you create the following index.
CREATE NONCLUSTERED INDEX [missing_index]
ON [dbo].[Tab9] ([Date1],[Date2])
INCLUDE ([Tab13Id])
The problematic part of the bad plan can be clearly seen in SQL Sentry Plan Explorer.
SQL Server estimates that 1.349174 rows will be returned from the previous joins coming into the join on Tab9. And therefore costs the nested loops join as if it will need to execute the scan on the inside table 1.349174 times.
In fact 2,600 rows feed into that join, meaning that it does 2,600 full scans of Tab9 (2,600 * 63,926 = 166,207,600 rows).
It just so happens that on the good plan the estimated number of rows coming into the join is 2.74319. This is still wrong by three orders of magnitude, but the slightly increased estimate means SQL Server favors a hash join instead. A hash join does just one pass through Tab9.
I would first try adding the missing index on Tab9.
Also (or instead) you might try updating the statistics on all tables involved (especially those with a date predicate, such as Tab2, Tab3, Tab7, Tab8 and Tab6) and see if that goes some way to correcting the huge discrepancy between estimated and actual rows on the left of the plan.
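For reference, a minimal sketch of that statistics refresh; FULLSCAN is my assumption, and sampled updates may well be enough:
-- Sketch: rebuild statistics on the tables carrying date predicates so
-- the row estimates on the left of the plan improve
UPDATE STATISTICS dbo.Tab2 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab3 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab6 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab7 WITH FULLSCAN;
UPDATE STATISTICS dbo.Tab8 WITH FULLSCAN;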
Also breaking the query up into smaller parts and materialising these into temporary tables with appropriate indexes might help. SQL Server can then use the statistics on these partial results to make better decisions for joins later in the plan.
Only as a last resort would I consider using query hints to try to force the plan with a hash join. Your options for doing that are either the USE PLAN hint, in which case you dictate exactly the plan you want (including all join types and orders), or writing LEFT OUTER HASH JOIN tab9 .... This second option also has the side effect of fixing the join order for the whole plan. Both mean that SQL Server will be severely limited in its ability to adjust the plan to changes in data distribution.
It's hard to answer without knowing the size and structure of your tables, and without being able to see the entire execution plan. But the difference between the two plans is a Hash Match join for the "top n" query vs. a Nested Loops join for the other one.
Hash Match is a resource-intensive join, because the server has to build hash buckets before it can use them. It becomes much more effective for big tables, while Nested Loops, which compares each row in one table to every row in the other, works great for small tables, because no such preparation is needed.
What I think is that by selecting TOP 1000000000000 rows in the subquery, you give the optimizer a hint that your subquery will produce a great amount of data, so it uses Hash Match. But in fact the output is small, so Nested Loops works better.
What I just said is based on shreds of information, so please have a heart when criticizing my answer ;).