I have written the following SQL stored procedure, and (I think because of all the SELECT commands) it runs really slowly now that the database has been populated with lots of data. Is there a way to optimise it so that it runs much quicker? Currently, on an Azure S0 database, it takes around 1:40 min to process. Here's the stored procedure:
USE [cmb2SQL]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[spStockReport] @StockReportId as INT
AS
select
ProductId,
QtyCounted,
LastStockCount,
Purchases,
UnitRetailPrice,
CostPrice,
GrossProfit,
Consumed,
(Consumed * UnitRetailPrice) as ValueOfSales,
(QtyCounted * CostPrice) as StockOnHand,
StockCountId
from (
select
ProductId,
QtyCounted,
LastStockCount,
Purchases,
UnitRetailPrice,
CostPrice,
GrossProfit,
(LastStockCount + Purchases) - QtyCounted as Consumed,
StockCountId
from (
select
distinct
sci.StockCountItem_Product as ProductId,
(Select ISNULL(SUM(Qty), 0) as tmpQty from
(Select Qty from stockcountitems
join stockcounts on stockcountitems.stockcountitem_stockcount = stockcounts.id
where stockcountitem_Product = p.Id and stockcountitem_stockcount = sc.id and stockcounts.stockcount_pub = sc.StockCount_Pub
) as data
) as QtyCounted,
(Select ISNULL(SUM(Qty), 0) as LastStockCount from
(Select Qty from StockCountItems as sci
join StockCounts on sci.StockCountItem_StockCount = StockCounts.Id
join Products on sci.StockCountItem_Product = Products.id
where sci.StockCountItem_Product = p.id and sci.stockcountitem_stockcount =
(select top 1 stockcounts.id from stockcounts
join stockcountitems on stockcounts.id = stockcountitem_stockcount
where stockcountitems.stockcountitem_product = p.id and stockcounts.id < sc.id and StockCounts.StockCount_Pub = sc.StockCount_Pub
order by stockcounts.id desc)
) as data
) as LastStockCount,
(Select ISNULL(SUM(Qty * CaseSize), 0) as Purchased from
(select Qty, Products.CaseSize from StockPurchaseItems
join Products on stockpurchaseitems.stockpurchaseitem_product = products.id
join StockPurchases on stockpurchaseitem_stockpurchase = stockpurchases.id
join Periods on stockpurchases.stockpurchase_period = periods.id
where Products.id = p.Id and StockPurchases.StockPurchase_Period = sc.StockCount_Period and StockPurchases.StockPurchase_Pub = sc.StockCount_Pub) as data
) as Purchases,
sci.RetailPrice as UnitRetailPrice,
sci.CostPrice,
(select top 1 GrossProfit from Pub_Products where Pub_Products.Pub_Product_Product = p.id and Pub_Products.Pub_Product_Pub = sc.StockCount_Pub) as GrossProfit,
sc.Id as StockCountId
from StockCountItems as sci
join StockCounts as sc on sci.StockCountItem_StockCount = sc.Id
join Pubs on sc.StockCount_Pub = pubs.Id
join Periods as pd on sc.StockCount_Period = pd.Id
join Products as p on sci.StockCountItem_Product = p.Id
join Pub_Products as pp on p.Id = pp.Pub_Product_Product
Where StockCountItem_StockCount = @StockReportId and pp.Pub_Product_Pub = sc.StockCount_Pub
Group By sci.CostPrice, sci.StockCountItem_Product, sci.Qty, sc.Id, p.Id, sc.StockCount_Period, pd.Id, sci.RetailPrice, pp.CountPrice, sc.StockCount_Pub
) as data
) as final
GO
As requested, here is the execution plan in XML (I had to upload it to tinyupload as it exceeds the message character limit):
execusionplan.xml
Schema:
Row Counts:
Table row_count
StockPurchaseItems 57511
Products 3116
StockCountItems 60949
StockPurchases 6494
StockCounts 240
Periods 30
Pub_Products 5694
Pubs 7
A query rewrite is the most expensive option and probably the last thing you should do, so let's not get into that yet. Try these steps first, one by one, and measure the impact: time, execution plan, and SET STATISTICS IO ON output. Create a baseline for these metrics first. Stop when you achieve acceptable performance.
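For instance, a minimal way to capture that baseline (the parameter value here is just a placeholder; use a representative report id):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

EXEC dbo.spStockReport @StockReportId = 1;  -- hypothetical id, pick a real one

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;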
First of all, update statistics on the relevant tables; I see some of the estimates are way off. Check the exec plan for estimated vs actual rows - any better now?
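A sketch of that step; WITH FULLSCAN is optional, but on tables of this size it gives the most accurate estimates:
UPDATE STATISTICS dbo.StockCountItems WITH FULLSCAN;
UPDATE STATISTICS dbo.StockPurchaseItems WITH FULLSCAN;
UPDATE STATISTICS dbo.StockCounts WITH FULLSCAN;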
Create indexes on StockPurchaseItems(StockPurchaseItem_Product) and on StockCountItems(StockCountItem_Product, StockCountItem_StockCount). Check the execution plan: did the optimizer consider using the indexes at all?
Add (include) other referenced columns to those two indexes in order to cover the query. Are they used now?
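A sketch of those two indexes; the names are made up and the INCLUDE lists are guesses taken from the columns the query touches, so adjust them to what the plan actually asks for:
CREATE NONCLUSTERED INDEX IX_StockPurchaseItems_Product
    ON StockPurchaseItems (StockPurchaseItem_Product)
    INCLUDE (StockPurchaseItem_StockPurchase, Qty);

CREATE NONCLUSTERED INDEX IX_StockCountItems_Product_StockCount
    ON StockCountItems (StockCountItem_Product, StockCountItem_StockCount)
    INCLUDE (Qty, RetailPrice, CostPrice);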
If none of the above helps, consider breaking the query into smaller ones. It would be nice to have some real data to experiment with in order to be more specific.
P.S. That SELECT DISTINCT smells really bad; are you sure the joins are all OK?
Related
I have the below query, which takes a while to run since ir_sales_summary is ~2 billion rows:
select c.ChainIdentifier, s.SupplierIdentifier, s.SupplierName, we.Weekend,
sum(sales_units_cy) as TY_unitSales, sum(sales_cost_cy) as TY_costDollars, sum(sales_units_ret_cy) as TY_retailDollars,
sum(sales_units_ly) as LY_unitSales, sum(sales_cost_ly) as LY_costDollars, sum(sales_units_ret_ly) as LY_retailDollars
from ir_sales_summary i
left join Chains c
on c.ChainID = i.ChainID
inner join Suppliers s
on s.SupplierID = i.SupplierID
inner join tmpWeekend we
on we.SaleDate = i.saledate
where year(i.saledate) = '2017'
group by c.ChainIdentifier, s.SupplierIdentifier, s.SupplierName, we.Weekend
(Worth noting, it takes roughly 3 hours to run since it is using a view that brings in data from a legacy service)
I'm thinking there's a way to speed up the filtering, since I just need the data from 2017. Should I be filtering on the big table (i) or on the much smaller week-ending table (which gives us just the week-ending dates)?
Try this; it might help. Joining a static table as the first table in the query onto a fact/dynamic table will impact query performance, I believe.
SELECT c.ChainIdentifier
,s.SupplierIdentifier
,s.SupplierName
,i.Weekend
,sum(sales_units_cy) AS TY_unitSales
,sum(sales_cost_cy) AS TY_costDollars
,sum(sales_units_ret_cy) AS TY_retailDollars
,sum(sales_units_ly) AS LY_unitSales
,sum(sales_cost_ly) AS LY_costDollars
,sum(sales_units_ret_ly) AS LY_retailDollars
FROM Suppliers s
INNER JOIN (
SELECT we.Weekend
,supplierid
,chainid
,sales_units_cy
,sales_cost_cy
,sales_units_ret_cy
,sales_units_ly
,sales_cost_ly
,sales_units_ret_ly
FROM ir_sales_summary i
INNER JOIN tmpWeekend we
ON we.SaleDate = i.saledate
WHERE year(i.saledate) = '2017'
) i
ON s.SupplierID = i.SupplierID
INNER JOIN Chains c
ON c.ChainID = i.ChainID
GROUP BY c.ChainIdentifier
,s.SupplierIdentifier
,s.SupplierName
,i.Weekend
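One more tweak worth testing, not part of the rewrite above: year(i.saledate) = '2017' has to evaluate the function for every row, which blocks any index on saledate. A half-open date range asks for the same rows but is sargable:
WHERE i.saledate >= '2017-01-01'
  AND i.saledate <  '2018-01-01'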
This query takes 2.17 sec and returns 128,000 rows. All the source tables have approx 100k rows, but the same query sometimes takes longer during production hours, which causes slowness in the system; it shows up among the top 5 most time-consuming queries in Oracle 11g. Can we optimize it further?
Note: indexes are created for all the columns used in WHERE and ON conditions. Please also find the attached query plan.
SELECT (ROWNUM)ROWNUMBER, RES.*
FROM
(SELECT PIDS,
FILENUM,
FILETYPE,
ATT,
FILESTA,
WSNAME,
CMP5.CLT_NAME AS CLIENT,
CLT_CODE,
CLTREF,
TMTYPE,
ASSOCNAME,
FFCOUNTRY,
APPLICATIONNO,
FF_APPLICATIONNO,
FILING_TRADEMARKTITLE,
FF_TRADEMARKTITLE,
QUERY_TRADEMARKTITLE,
CMP6.DOCNAME AS SEARCH_TRADEMARKTITLE,
CMP7.DOCUMENTNAME as FF_SEARCH_TRADEMARK,
CMP.CLASS AS CLASSES,
CMP4.CLASS AS CLASSESFF,
CMP2.NAME AS PROPRIETER,
CMP2.CODE AS PROPRIETERCODE,
CMP5.CNTRY AS CNTRY,
CMP5.STATE AS STATE,
CMP5.CITY AS CITY
FROM
(SELECT EXT.PID AS PIDS,
EXT.FILE_NUMBER AS FILENUM,
EXT.proceeding_assignmenttype AS FILETYPE,
EXT.ATTORNEY AS ATT,
EXT.FILE_STATUS AS FILESTA,
CURRENTWS AS WSNAME,
EXT.CLIENT_CODE AS CLT_CODE,
EXT.client_ref AS CLTREF,
EXT.FP_APP_TYPE AS TMTYPE,
EXT.ff_nameofassociate AS ASSOCNAME,
EXT.ff_nameofcountry AS FFCOUNTRY,
EXT.filing_trademarkno AS APPLICATIONNO,
EXT.ff_trademarknumber AS FF_APPLICATIONNO,
EXT.FILING_TRADEMARK AS FILING_TRADEMARKTITLE,
EXT.FF_TRADMARK AS FF_TRADEMARKTITLE,
EXT.Q_TRADEMARK AS QUERY_TRADEMARKTITLE
FROM EXT_TRADEMARK EXT
WHERE CURRENTWSSTATUS = '1'
ORDER BY EXT.FILE_NUMBER DESC
)EXTTABLE
LEFT JOIN COMPLEX_TM_CLASS CMP
ON EXTTABLE.PIDS = CMP.PROCESSINSTANCEID
LEFT JOIN COMPLEX_TM_PROPRIEATOR CMP2
ON EXTTABLE.PIDS = CMP2.PROCESSINSTANCEID
LEFT JOIN COMPLEX_TM_FF_CLASS CMP4
ON EXTTABLE.PIDS = CMP4.PROCESSINSTANCEID
LEFT JOIN FA_CLT_MSTR CMP5
ON EXTTABLE.CLT_CODE = CMP5.CLT_COD
LEFT JOIN COMPLEX_TM_SEARCH CMP6
ON EXTTABLE.PIDS = CMP6.PID
LEFT JOIN COMPLEX_TM_FF_SEARCH CMP7
ON EXTTABLE.PIDS = CMP7.PID order by FILENUM desc
) RES ;
I need to improve my query, especially the execution time. This is my query:
SELECT SQL_CALC_FOUND_ROWS p.*,v.type,v.idName,v.name as etapaName,m.name AS manager,
c.name AS CLIENT,
(SELECT SEC_TO_TIME(SUM(TIME_TO_SEC(duration)))
FROM activities a
WHERE a.projectid = p.projectid) AS worked,
(SELECT SUM(TIME_TO_SEC(duration))
FROM activities a
WHERE a.projectid = p.projectid) AS worked_seconds,
(SELECT SUM(TIME_TO_SEC(remain_time))
FROM tasks t
WHERE t.projectid = p.projectid) AS remain_time
FROM projects p
INNER JOIN users m
ON p.managerid = m.userid
INNER JOIN clients c
ON p.clientid = c.clientid
INNER JOIN `values` v
ON p.etapa = v.id
WHERE 1 = 1
ORDER BY idName
ASC
The execution time of this is approx. 5 sec. If I remove this part: (SELECT SUM(TIME_TO_SEC(remain_time)) FROM tasks t WHERE t.projectid = p.projectid) AS remain_time
the execution time is reduced to 0.3 sec. Is there a way to get the values of remain_time while reducing the execution time?
The SQL is invoked from PHP (if this is relevant to any proposed solution).
It sounds like you need an index on tasks.
Try adding this one:
create index idx_tasks_projectid_remaintime on tasks(projectid, remain_time);
The correlated subquery should just use the index and go much faster.
Optimizing the query as it is written would give significant performance benefits (see below). But the FIRST QUESTION TO ASK when approaching any optimization is whether you really need to see all the data; there is no filtering of the result set implemented here. This has a HUGE impact on how you optimize a query.
Adding an index on the query above will only help if the optimizer is opening a new cursor on the tasks table for every row returned by the main query. In the absence of any filtering, it will be much faster to do a full table scan of the tasks table.
SELECT ilv.*, remaining.rtime
FROM (
SELECT p.*,v.type, v.idName, v.name as etapaName,
m.name AS manager, c.name AS CLIENT,
SEC_TO_TIME(asbq.worked) AS worked, asbq.worked AS seconds_worked
FROM projects p
INNER JOIN users m
ON p.managerid = m.userid
INNER JOIN clients c
ON p.clientid = c.clientid
INNER JOIN `values` v
ON p.etapa = v.id
LEFT JOIN (
SELECT a.projectid, SUM(TIME_TO_SEC(duration)) AS worked
FROM activities a
GROUP BY a.projectid
) asbq
ON asbq.projectid=p.projectid
) ilv
LEFT JOIN (
SELECT t.projectid, SUM(TIME_TO_SEC(remain_time)) as rtime
FROM tasks t
GROUP BY t.projectid
) remaining
ON ilv.projectid=remaining.projectid
I am trying to join 14 tables, a few of which I need to join using LEFT JOIN.
With the existing data, which is around 7,000 records, it takes around 10 seconds to execute the query below. I am afraid of what will happen when there are more than a million records. Please help me improve the performance of this query.
CREATE proc [dbo].[GetTodaysActualInvoiceItemSoldHistory]
@fromdate datetime,
@todate datetime
as
Begin
select SDID.InvoiceDate as [Sold Date],Cust.custCompanyName as [Sold To] ,
case SQBD.TransferNo when '0' then IVM.VendorName else SQBD.TransferNo end as [Purchase From],
SQBD.BatchSellQty as SoldQty,SQID.SellPrice,
SDID.InvoiceNo as [Sales Invoice No],INV.PRInvoiceNo as [PO Invoice No],INV.PRInvoiceDate as [PO Invoice Date],
SQID.ItemDesc as [Item Description],SQID.NetPrice,SDHM.DeliveryHeaderMasterName as DeliveryHeaderName,
SQID.ItemCode as [Item Code],
SQBD.BatchNo,SQBD.ExpiryDate,SQID.Amount,
SQID.Dept_ID as Dept_ID,
Dept_Name as [Department],SQID.Catg_ID as Catg_ID,
Category_Name as [Category],SQID.Brand_ID as Brand_ID,
BrandName as BrandName, SQID.Manf_Id as Manf_Id,
Manf.ManfName as [Manufacturer],
STM.TaxName, SQID.Tax_ID as Tax_ID,
INV.VendorID as VendorID,
SQBD.ItemID,SQM.Isdeleted,
SDHM.DeliveryHeaderMasterID,Cust.CustomerMasterID
from SD_QuotationMaster SQM
inner join SD_InvoiceDetails SDID on SQM.QuoteID = SDID.QuoteID
inner join SD_QuoteItemDetails SQID on SDID.QuoteID = SQID.QuoteID
inner join SD_QuoteBatchDetails SQBD on SDID.QuoteID = SQBD.QuoteID and SQID.ItemID=SQBD.ItemID
inner join INV_ProductInvoice INV on SQBD.InvoiceID=INV.ProductInvoiceID
inner join INV_VendorMaster IVM ON INV.VendorID = IVM.VendorID
inner join Sys_TaxMaster STM ON SQID.Tax_ID = STM.Tax_ID
inner join Cust_CustomerMaster Cust on SQM.CustomerMasterID = Cust.CustomerMasterID
left join INV_DeptartmentMaster Dept ON SQID.Dept_ID = Dept.Dept_ID
left join INV_BrandMaster BRD ON SQID.Brand_ID = BRD.Brand_ID
left join INV_ManufacturerMaster Manf ON SQID.Manf_Id = Manf.Manf_Id
left join INV_CategoryMaster CAT ON SQID.Catg_ID = CAT.Catg_ID
left join SLRB_DeliveryCustomerMaster SDCM on SQM.CustomerMasterID=SDCM.CustomerMasterID and SQM.DeliveryHeaderMasterID=SDCM.DeliveryHeaderMasterID
left join SLRB_DeliveryHeaderMaster SDHM on SDCM.DeliveryHeaderMasterID=SDHM.DeliveryHeaderMasterID
where (SQM.IsDeleted=0) and SQBD.BatchSellQty > 0
and SDID.InvoiceDate between @fromdate and @todate
order by ItemDesc
End
Only the tables below contain more data; the other tables have fewer than 20 records each.
InvoiceDetails, QuoteMaster, QuoteItemDetails, QuoteBatchDetails ProductInvoice
Below is the link for execution plan
http://jmp.sh/CSZc2x2
Thanks.
Let's start with an obvious error:
(isnull(SQBD.BatchSellQty,0) > 0)
That one is not indexable, so it should not happen. Seriously, BatchSellQty should not be unknown (nullable) in most cases, or you had better handle NULL properly. That field should be indexed, and I am not sure I would like that with an ISNULL; there are likely tons of batches. Also note that a filtered index (condition > 0) may work here.
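For example, a sketch only; the index name is made up and the key and INCLUDE columns are guesses based on the query above:
CREATE NONCLUSTERED INDEX IX_SD_QuoteBatchDetails_Sellable
    ON SD_QuoteBatchDetails (QuoteID, ItemID)
    INCLUDE (BatchSellQty, TransferNo, InvoiceID, BatchNo, ExpiryDate)
    WHERE BatchSellQty > 0;  -- filtered: only rows the query actually wants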
Second, check that you have all proper indices and the execution plan makes sense.
Third, you have to test with a ton of data. Index statistics may make a difference. Check where the time is spent; it may be tempdb, in which case you really need good tempdb IO speed... and that is not related to the input side.
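One way to check the tempdb angle, using a standard DMV for the current session (run it right after the query finishes on the same connection):
SELECT user_objects_alloc_page_count,
       internal_objects_alloc_page_count  -- sorts, spools, hash spills land here
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;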
You can try using query hints to help the SQL Server optimizer build an optimal query execution plan. For example, you can force the order in which tables are joined using the FORCE ORDER hint. If you order your tables so that each join step produces the minimum result size, the query will execute faster (maybe; you need to try). Example:
We need to A join B join C
If A join B = 2000 records x 1000 records = ~400 records (we suspect this result)
And A join C = 2000 records x 10 records = ~3 records (and this)
And B join C = 1000 records x 10 records = 10 000 records (and this)
In this case the optimal order will be
A join C join B = ~3 records x 1000 records = ~3000 records
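A sketch of the hint using the hypothetical tables A, B, C from the example above (the join columns are invented); with OPTION (FORCE ORDER), SQL Server keeps the FROM-clause order:
SELECT *
FROM A
INNER JOIN C ON C.A_Id = A.Id   -- smallest intermediate result first
INNER JOIN B ON B.C_Id = C.Id
OPTION (FORCE ORDER);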
I have a hard time with query optimization, and currently I'm very close to the point of a database redesign; Stack Overflow is my last hope. I don't think that just showing you the query is enough, so I've linked not only the database script but also attached a database backup, in case you don't want to generate the data by hand.
Here you can find both the script and the backup
The problems start when you try to do the following...
exec LockBranches @count=64, @lockedBy='034C0396-5C34-4DDA-8AD5-7E43B373AE5A', @lockedOn='2011-07-01 01:29:43.863', @unlockOn='2011-07-01 01:32:43.863'
The main problems occur in this part:
UPDATE B
SET B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) B.LockedBy, B.LockedOn, B.UnlockOn, B.Complete
FROM Objectives AS O
INNER JOIN Generations AS G ON G.ObjectiveID = O.ID
INNER JOIN Branches AS B ON B.GenerationID = G.ID
INNER JOIN
(
SELECT SB.BranchID AS BranchID, SUM(X.SuitableProbes) AS SuitableProbes
FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
INNER JOIN
(
SELECT P.ID, 1 AS SuitableProbes
FROM Probes AS P
/* ----> */ INNER JOIN Results AS R ON P.ID = R.ProbeID /* SSMS Estimated execution plan says this operation is the roughest */
GROUP BY P.ID
HAVING COUNT(R.ID) > 0
) AS X ON P.ID = X.ID
GROUP BY SB.BranchID
) AS X ON X.BranchID = B.ID
WHERE
(O.Active = 1)
AND (B.Sealed = 0)
AND (B.GenerationNo < O.BranchGenerations)
AND (B.LockedBy IS NULL OR DATEDIFF(SECOND, B.UnlockOn, GETDATE()) > 0)
AND (B.Complete = 1 OR X.SuitableProbes = O.BranchSize * O.EstimateCount * O.ProbeCount)
) AS B
EDIT: Here are the row counts for each table:
Spicies 71536
Results 10240
Probes 10240
SpicieBranches 4096
Branches 256
Estimates 5
Generations 1
Versions 1
Objectives 1
Somebody else might be able to explain better than I can why this is much quicker. Experience tells me that when you have a bunch of queries that collectively run slowly but should be quick in their individual parts, it's worth trying a temporary table.
This is much quicker
ALTER PROCEDURE LockBranches
-- Add the parameters for the stored procedure here
@count INT,
@lockedOn DATETIME,
@unlockOn DATETIME,
@lockedBy UNIQUEIDENTIFIER
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON
--Create Temp Table
SELECT SpicieBranches.BranchID AS BranchID, SUM(X.SuitableProbes) AS SuitableProbes
INTO #BranchSuitableProbeCount
FROM SpicieBranches
INNER JOIN Probes AS P ON P.SpicieID = SpicieBranches.SpicieID
INNER JOIN
(
SELECT P.ID, 1 AS SuitableProbes
FROM Probes AS P
INNER JOIN Results AS R ON P.ID = R.ProbeID
GROUP BY P.ID
HAVING COUNT(R.ID) > 0
) AS X ON P.ID = X.ID
GROUP BY SpicieBranches.BranchID
UPDATE B SET
B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) Branches.LockedBy, Branches.LockedOn, Branches.UnlockOn, Branches.Complete
FROM Objectives
INNER JOIN Generations ON Generations.ObjectiveID = Objectives.ID
INNER JOIN Branches ON Branches.GenerationID = Generations.ID
INNER JOIN #BranchSuitableProbeCount ON Branches.ID = #BranchSuitableProbeCount.BranchID
WHERE
(Objectives.Active = 1)
AND (Branches.Sealed = 0)
AND (Branches.GenerationNo < Objectives.BranchGenerations)
AND (Branches.LockedBy IS NULL OR DATEDIFF(SECOND, Branches.UnlockOn, GETDATE()) > 0)
AND (Branches.Complete = 1 OR #BranchSuitableProbeCount.SuitableProbes = Objectives.BranchSize * Objectives.EstimateCount * Objectives.ProbeCount)
) AS B
END
This is much quicker with an average execution time of 54ms compared to 6 seconds with the original one.
EDIT
Had a look and combined my ideas with those from RBarryYoung's solution. If you use the following to create the temporary table
SELECT SB.BranchID AS BranchID, COUNT(*) AS SuitableProbes
INTO #BranchSuitableProbeCount
FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
WHERE EXISTS(SELECT * FROM Results AS R WHERE R.ProbeID = P.ID)
GROUP BY SB.BranchID
then you can get this down to 15 ms, which is 400x better than where we started. Looking at the execution plan shows that there is a table scan happening on the temp table. Normally you avoid table scans as best you can, but for 128 rows (in this case) it is quicker than whatever it was doing before.
This is basically a complete guess, but in times past I've found that joining onto the results of a subquery can be horrifically slow; that is, the subquery was being evaluated way too many times when it really didn't need to be.
The way around this was to move the subqueries into CTEs and join onto those instead. Good luck!
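A sketch of that shape, using the counting subquery from this question (note that SQL Server may still inline a CTE rather than materialize it, so measure before and after):
WITH SuitableProbeCounts AS
(
    SELECT SB.BranchID, COUNT(*) AS SuitableProbes
    FROM SpicieBranches AS SB
    INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
    WHERE EXISTS (SELECT * FROM Results AS R WHERE R.ProbeID = P.ID)
    GROUP BY SB.BranchID
)
SELECT B.ID, X.SuitableProbes
FROM Branches AS B
INNER JOIN SuitableProbeCounts AS X ON X.BranchID = B.ID;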
It appears the join on the two uniqueidentifier columns is the source of the problem. One is a clustered index, the other non-clustered (on the FK table). It's good that there are indexes on them; unfortunately, GUIDs are notoriously poor performers when joining large numbers of rows.
As troubleshooting steps:
What state are the indexes in? When were the statistics last updated?
How performant is that subquery on its own, when executed ad hoc? i.e. when you run the statement below by itself, how fast does the result set return? Is it acceptable?
After rebuilding the two indexes and updating statistics, is there any measurable difference? (See the sketch after the query below.)
SELECT P.ID, 1 AS SuitableProbes FROM Probes AS P
INNER JOIN Results AS R ON P.ID = R.ProbeID
GROUP BY P.ID HAVING COUNT(R.ID) > 0
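A sketch of the rebuild-and-refresh step; ALL is used because the index names aren't given here, so narrow it to the two indexes in question if you know them:
ALTER INDEX ALL ON dbo.Probes REBUILD;
ALTER INDEX ALL ON dbo.Results REBUILD;
UPDATE STATISTICS dbo.Probes;   -- refreshes column statistics too
UPDATE STATISTICS dbo.Results;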
The following runs about 15x faster on my system:
UPDATE B
SET B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) B.LockedBy, B.LockedOn, B.UnlockOn, B.Complete
FROM Objectives AS O
INNER JOIN Generations AS G ON G.ObjectiveID = O.ID
INNER JOIN Branches AS B ON B.GenerationID = G.ID
INNER JOIN
(
SELECT SB.BranchID AS BranchID, COUNT(*) AS SuitableProbes
FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
WHERE EXISTS(SELECT * FROM Results AS R WHERE R.ProbeID = P.ID)
GROUP BY SB.BranchID
) AS X ON X.BranchID = B.ID
WHERE
(O.Active = 1)
AND (B.Sealed = 0)
AND (B.GenerationNo < O.BranchGenerations)
AND (B.LockedBy IS NULL OR DATEDIFF(SECOND, B.UnlockOn, GETDATE()) > 0)
AND (B.Complete = 1 OR X.SuitableProbes = O.BranchSize * O.EstimateCount * O.ProbeCount)
) AS B
Insert the subquery into a local temporary table:
SELECT SB.BranchID AS BranchID, SUM(X.SuitableProbes) AS SuitableProbes
into #temp FROM SpicieBranches AS SB
INNER JOIN Probes AS P ON P.SpicieID = SB.SpicieID
INNER JOIN
(
SELECT P.ID, 1 AS SuitableProbes
FROM Probes AS P
/* ----> */ INNER JOIN Results AS R ON P.ID = R.ProbeID /* SSMS Estimated execution plan says this operation is the roughest */
GROUP BY P.ID
HAVING COUNT(R.ID) > 0
) AS X ON P.ID = X.ID
GROUP BY SB.BranchID
The query below joins each table against a pre-filtered (partial) derived table instead of the complete table!
UPDATE B
SET B.LockedBy = @lockedBy,
B.LockedOn = @lockedOn,
B.UnlockOn = @unlockOn,
B.Complete = 1
FROM
(
SELECT TOP (@count) B.LockedBy, B.LockedOn, B.UnlockOn, B.Complete
From
(
SELECT ID, BranchGenerations, (BranchSize * EstimateCount * ProbeCount) as MultipliedFactor
FROM Objectives AS O WHERE (O.Active = 1)
)O
INNER JOIN Generations AS G ON G.ObjectiveID = O.ID
Inner Join
(
Select Sealed, GenerationNo, LockedBy, UnlockOn, ID, Complete, GenerationID
From Branches
Where Sealed = 0 AND (LockedBy IS NULL OR DATEDIFF(SECOND, UnlockOn, GETDATE()) > 0)
)B ON B.GenerationID = G.ID
INNER JOIN
(
Select * from #temp
) AS X ON X.BranchID = B.ID
WHERE
(B.GenerationNo < O.BranchGenerations)
AND (B.Complete = 1 OR X.SuitableProbes = O.MultipliedFactor)
) AS B