I have a query that takes 20 minutes to execute... I remember that in one project we used /*+ PARALLEL(T,8) */, or we would use a WITH clause with /*+ materialize */, and it would make the query response time really fast, within seconds. How can I do this for this query?
select count(*) from (
select hdr.ACCESS_IND,
hdr.SID,
hdr.CLLI,
hdr.DA,
hdr.TAPER_CODE,
hdr.CFG_TYPE as CFG_TYPE,
hdr.IP_ADDR,
hdr.IOS_VERSION,
hdr.ADMIN_STATE,
hdr.WIRE_CENTER,
substr(hdr.SID_IO_PRI, 1, 8) PRI_IO_CLLI,
substr(hdr.SID_IO_SEC, 1, 8) SEC_IO_CLLI,
hdr.VHO_CLLI ,
hdr.CFG_TYPE ,
-- dtl.MULTIPURPOSE_IND,
lkup.code3 as shelf_type
from RPT_7330_HDR hdr
INNER JOIN RPT_7330_DTL dtl on hdr.EID = dtl.EID
INNER JOIN CODE_LKUP2 lkup ON LKUP.CODE1 = hdr.ACCESS_IND
where LKUP.CATEGORY='ACCESS_MAPPING' and hdr.DT_MODIFIED = (select DT_MODIFIED
from LS_DT_MODIFIED
where NAME = 'RPT_7330_HDR')) n;
Try this, might be faster:
select count(*)
from RPT_7330_HDR hdr
JOIN LS_DT_MODIFIED LS ON LS.NAME = 'RPT_7330_HDR' AND hdr.DT_MODIFIED = LS.DT_MODIFIED
JOIN RPT_7330_DTL dtl on hdr.EID = dtl.EID
JOIN CODE_LKUP2 lkup ON LKUP.CODE1 = hdr.ACCESS_IND AND LKUP.CATEGORY='ACCESS_MAPPING'
The SQL engine can parallelize joins if you have the right indexes and such. It is often able to optimize joins when it can't optimize sub-queries.
If you want the data, then:
SELECT /*+ PARALLEL(DTL,4) */
HDR.ACCESS_IND,
HDR.SID,
HDR.CLLI,
HDR.DA,
HDR.TAPER_CODE,
HDR.CFG_TYPE AS CFG_TYPE,
HDR.IP_ADDR,
HDR.IOS_VERSION,
HDR.ADMIN_STATE,
HDR.WIRE_CENTER,
SUBSTR ( HDR.SID_IO_PRI, 1, 8 ) PRI_IO_CLLI,
SUBSTR ( HDR.SID_IO_SEC, 1, 8 ) SEC_IO_CLLI,
HDR.VHO_CLLI,
HDR.CFG_TYPE,
LKUP.CODE3 AS SHELF_TYPE
FROM
RPT_7330_HDR HDR INNER JOIN RPT_7330_DTL DTL ON HDR.EID = DTL.EID
INNER JOIN CODE_LKUP2 LKUP ON LKUP.CODE1 = HDR.ACCESS_IND
INNER JOIN LS_DT_MODIFIED LS ON HDR.DT_MODIFIED = LS.DT_MODIFIED
WHERE
LKUP.CATEGORY = 'ACCESS_MAPPING'
AND LS.NAME = 'RPT_7330_HDR';
If you want the count, then:
SELECT /*+ PARALLEL(DTL,4) */
COUNT (*)
FROM
RPT_7330_HDR HDR INNER JOIN RPT_7330_DTL DTL ON HDR.EID = DTL.EID
INNER JOIN CODE_LKUP2 LKUP ON LKUP.CODE1 = HDR.ACCESS_IND
INNER JOIN LS_DT_MODIFIED LS ON HDR.DT_MODIFIED = LS.DT_MODIFIED
WHERE
LKUP.CATEGORY = 'ACCESS_MAPPING'
AND LS.NAME = 'RPT_7330_HDR';
Note: The hint is used on the DTL table, which carries the highest cost due to its full table scan (FTS). The number 4 is the requested degree of parallelism (with producer/consumer pairs this can mean up to 8 parallel execution servers). Identify your pain points from the query plan and decide on your hints, parallel or otherwise. You can also use the parallel hint on multiple tables: /*+ PARALLEL(table1, 4) PARALLEL(table2, 4) PARALLEL(table3, 4) PARALLEL(table4, 4) */. Also, this works only on Enterprise Edition and not on Standard Edition.
SELECT /*+ PARALLEL */ ...
You do not want to use a magic number and you do not want to list any objects.
A degree of parallelism of 8 is probably too high on your laptop, yet too low on your production server. If the query simply specifies PARALLEL, Oracle will automatically determine the DOP if it's configured for automatic parallelism. Or it will default to the number of CPUs * threads per CPU * number of instances. Judging by all the manuals, features, and whitepapers, Oracle has put a lot of thought into the degree of parallelism. If you're not going to benchmark it you should trust the defaults.
In 11gR2, when you don't list any objects you get statement-level parallelism instead of object-level parallelism. There's no need to work out exactly which objects and access paths need to be parallelized, and if you change the query or the alias names later there's no risk of the hint silently no longer applying.
Here's a quick introduction to the parallel hint. There's also the
Using Parallel Execution chapter of the VLDB and Partitioning Guide.
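For illustration, here is a hedged sketch of both hint styles against the tables from the question (the DOP of 4 is only an example; the statement-level PARALLEL form assumes 11gR2 or later and Enterprise Edition):
-- Object-level: parallelize only the scan of RPT_7330_DTL with a DOP of 4
SELECT /*+ PARALLEL(DTL, 4) */ COUNT(*)
FROM RPT_7330_HDR HDR
INNER JOIN RPT_7330_DTL DTL ON HDR.EID = DTL.EID;
-- Statement-level (11gR2+): no object, no magic number; Oracle picks the DOP
SELECT /*+ PARALLEL */ COUNT(*)
FROM RPT_7330_HDR HDR
INNER JOIN RPT_7330_DTL DTL ON HDR.EID = DTL.EID;
-- Check whether automatic DOP is configured
SELECT name, value FROM v$parameter WHERE name = 'parallel_degree_policy';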
Related
I am running a simple SELECT that inner joins three other tables. All the tables are big (lots of data) and it takes around 20 seconds to run; I want to optimize it.
I tried using NOLOCK, but it made little difference.
SELECT RR.ReportID,
RR.RequestFormat,
RRP.SequenceNumber,
RRP.ParameterName,
RRP.ParameterValue,
CASE WHEN RP.ParameterLabelOvrrd IS NULL THEN P.ParameterLabel ELSE RP.ParameterLabelOvrrd END AS ParameterLabelChosen,
RRP.ParameterValueEntered
FROM ReportRequestParameters AS RRP WITH (NOLOCK)
INNER JOIN ReportRequests AS RR WITH (NOLOCK) ON RRP.RequestID = RR.RequestID
INNER JOIN ReportParameter AS RP WITH (NOLOCK) ON RP.ReportID = RR.ReportID
AND RP.SequenceNumber = RRP.SequenceNumber
INNER JOIN Parameter AS P WITH (NOLOCK) ON P.ParameterID = RP.ParameterID
WHERE RRP.RequestID = '2226765'
ORDER BY SequenceNumber;
Please advise.
This is your query:
SELECT RR.ReportID, RR.RequestFormat, RRP.SequenceNumber,
RRP.ParameterName, RRP.ParameterValue,
COALESCE(RP.ParameterLabelOvrrd, P.ParameterLabel) as ParameterLabelChosen,
RRP.ParameterValueEntered
FROM ReportRequestParameters RRP JOIN
ReportRequests RR
ON RRP.RequestID = RR.RequestID JOIN
ReportParameter RP
ON RP.ReportID = RR.ReportID AND
RP.SequenceNumber = RRP.SequenceNumber JOIN
Parameter P
ON P.ParameterID = RP.ParameterID
WHERE RRP.RequestID = 2226765
ORDER BY RRP.SequenceNumber;
I have removed the single quotes on 2226765, assuming that the id is a number. Mixing types can impede the optimizer.
Then, I recommend an index on ReportRequestParameters(RequestID, SequenceNumber). I assume the other tables have indexes on the appropriate columns, but the ones that matter here are (see the CREATE INDEX sketch after this list):
ReportRequests(RequestID, ReportID, SequenceNumber)
ReportParameter(ReportID, SequenceNumber, ParameterID)
Parameter(ParameterID)
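A minimal sketch of the corresponding DDL, mirroring the column lists above (index names are invented; skip any index that already exists, e.g. as a primary key, and adjust the column lists to your actual schema):
-- Drives the lookup on RRP.RequestID and the ORDER BY on SequenceNumber
CREATE INDEX IX_ReportRequestParameters_RequestID_Seq
    ON ReportRequestParameters (RequestID, SequenceNumber);
-- Join-support indexes on the other tables, as listed above
CREATE INDEX IX_ReportRequests_RequestID
    ON ReportRequests (RequestID, ReportID, SequenceNumber);
CREATE INDEX IX_ReportParameter_ReportID_Seq
    ON ReportParameter (ReportID, SequenceNumber, ParameterID);
CREATE INDEX IX_Parameter_ParameterID
    ON Parameter (ParameterID);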
I strongly advise you not to use nolock, unless you know what you are doing. Aaron Bertrand has a good blog post on this subject.
I would suggest running with the execution plan turned on and see if SSMS can advise you on additional indexing.
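For example, from a query window you can capture per-table I/O and timing alongside the actual plan (a sketch; in SSMS the actual execution plan is toggled with Ctrl+M or from the Query menu):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the SELECT from above here and read the logical-reads / elapsed-time output
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;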
Other than that, your query looks straightforward. There is nothing code-wise that will make it much faster, apart from perhaps getting rid of the CASE expression and definitely getting rid of the NOLOCK hints.
I have a query against a table that contains about 2 million rows, accessed through a linked server.
Select * from OPENQUERY(LinkedServerName,
'SELECT
PV.col1
,PV.col2
,PV.col3
,VTR.col1
,CTR.col1
,PSR.col1
FROM
LinkedDbName.dbo.tbl1 PV
INNER JOIN LinkedDbName.dbo.tbl2 VTR
ON PV.col_id = VTR.col_id
INNER JOIN LinkedDbName.dbo.tbl3 CTR
ON PV.col_id = CTR.col_id
INNER JOIN LinkedDbName.dbo.tbl4 PSR
ON PV.col_id = PSR.col_id
WHERE
PV.col_id = ''80C53C9B-6272-11DA-BB34-000E0C7F3ED2''')
That query returns 365 rows and executes in well under a second.
However, when I turn that query into a view, it takes at least 20 seconds and sometimes as long as 40 seconds.
Here's my CREATE VIEW script:
CREATE VIEW [dbo].[myview]
AS
Select * from OPENQUERY(LinkedServerName,
'SELECT
PV.col1
,PV.col2
,PV.col3
,VTR.col1
,CTR.col1
,PSR.col1
FROM
LinkedDbName.dbo.tbl1 PV
INNER JOIN LinkedDbName.dbo.tbl2 VTR
ON PV.col_id = VTR.col_id
INNER JOIN LinkedDbName.dbo.tbl3 CTR
ON PV.col_id = CTR.col_id
INNER JOIN LinkedDbName.dbo.tbl4 PSR
ON PV.col_id = PSR.col_id')
then
Select * from myview where PV.col_id = '80C53C9B-6272-11DA-BB34-000E0C7F3ED2'
Any ideas? Thanks!
Your queries are quite different. In the first, the where clause is part of the SQL statement passed to OPENQUERY(). This has two important effects:
The amount of data returned is much smaller, only being the rows that match the condition.
The remote server can optimize the query using the WHERE clause.
With the view, the WHERE clause is applied locally, after the entire remote result set has been pulled across the linked server. If you need to share the data, I might suggest that you make a copy on the local server -- either using replication or scheduling a job to copy it over.
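Alternatively, if copying the data locally is not an option, one workaround (my sketch, not part of the original suggestion) is to keep the filter inside the remote statement by using a parameterized pass-through EXEC ... AT, which assumes the linked server has the RPC Out option enabled:
DECLARE @id char(36) = '80C53C9B-6272-11DA-BB34-000E0C7F3ED2';
EXEC ('SELECT PV.col1, PV.col2, PV.col3, VTR.col1, CTR.col1, PSR.col1
       FROM LinkedDbName.dbo.tbl1 PV
       INNER JOIN LinkedDbName.dbo.tbl2 VTR ON PV.col_id = VTR.col_id
       INNER JOIN LinkedDbName.dbo.tbl3 CTR ON PV.col_id = CTR.col_id
       INNER JOIN LinkedDbName.dbo.tbl4 PSR ON PV.col_id = PSR.col_id
       WHERE PV.col_id = ?', @id) AT LinkedServerName;
-- The ? placeholder is bound to @id on the remote side, so only the matching
-- rows come back across the linked server.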
I am baffled as to why selecting from my SQL view is so slow when a table alias is used (25 seconds) but so much faster when the alias is removed (2 seconds).
-- this query takes 25 seconds.
SELECT [Extent1].[Id] AS [Id],
[Extent1].[ProjectId] AS [ProjectId],
[Extent1].[ProjectWorkOrderId] AS [ProjectWorkOrderId],
[Extent1].[Project] AS [Project],
[Extent1].[SubcontractorId] AS [SubcontractorId],
[Extent1].[Subcontractor] AS [Subcontractor],
[Extent1].[ValuationNumber] AS [ValuationNumber],
[Extent1].[WorksOrderName] AS [WorksOrderName],
[Extent1].[NewGross],
[Extent1].[CumulativeGross],
[Extent1].[CreateByName] AS [CreateByName],
[Extent1].[CreateDate] AS [CreateDate],
[Extent1].[FinalDateForPayment] AS [FinalDateForPayment],
[Extent1].[CreateByEmail] AS [CreateByEmail],
[Extent1].[Deleted] AS [Deleted],
[Extent1].[ValuationStatusCategoryId] AS [ValuationStatusCategoryId]
FROM [dbo].[ValuationsTotal] AS [Extent1]
-- this query takes 2 seconds.
SELECT [Id],
[ProjectId],
[Project],
[SubcontractorId],
[Subcontractor],
[NewGross],
[ProjectWorkOrderId],
[ValuationNumber],
[WorksOrderName],
[CreateByName],
[CreateDate],
[CreateByEmail],
[Deleted],
[ValuationStatusCategoryId],
[FinalDateForPayment],
[CumulativeGross]
FROM [dbo].[ValuationsTotal]
This is my SQL view code:
WITH ValuationTotalsTemp(Id, ProjectId, Project, SubcontractorId, Subcontractor, WorksOrderName, NewGross, ProjectWorkOrderId, ValuationNumber, CreateByName, CreateDate, CreateByEmail, Deleted, ValuationStatusCategoryId, FinalDateForPayment)
AS (SELECT vi.ValuationId AS Id,
v.ProjectId,
p.NAME,
b.Id AS Expr1,
b.NAME AS Expr2,
wo.OrderNumber,
SUM(vi.ValuationQuantity * pbc.BudgetRate) AS 'NewGross',
sa.ProjectWorkOrderId,
v.ValuationNumber,
up.FirstName + ' ' + up.LastName AS Expr3,
v.CreateDate,
up.Email,
v.Deleted,
v.ValuationStatusCategoryId,
sa.FinalDateForPayment
FROM dbo.ValuationItems AS vi
INNER JOIN dbo.ProjectBudgetCosts AS pbc
ON vi.ProjectBudgetCostId = pbc.Id
INNER JOIN dbo.Valuations AS v
ON vi.ValuationId = v.Id
INNER JOIN dbo.ProjectSubcontractorApplications AS sa
ON sa.Id = v.ProjectSubcontractorApplicationId
INNER JOIN dbo.Projects AS p
ON p.Id = v.ProjectId
INNER JOIN dbo.ProjectWorkOrders AS wo
ON wo.Id = sa.ProjectWorkOrderId
INNER JOIN dbo.ProjectSubcontractors AS sub
ON sub.Id = wo.ProjectSubcontractorId
INNER JOIN dbo.Businesses AS b
ON b.Id = sub.BusinessId
INNER JOIN dbo.UserProfile AS up
ON up.Id = v.CreateBy
WHERE ( vi.Deleted = 0 )
AND ( v.Deleted = 0 )
GROUP BY vi.ValuationId,
v.ProjectId,
p.NAME,
b.Id,
b.NAME,
wo.OrderNumber,
sa.ProjectWorkOrderId,
v.ValuationNumber,
up.FirstName + ' ' + up.LastName,
v.CreateDate,
up.Email,
v.Deleted,
v.ValuationStatusCategoryId,
sa.FinalDateForPayment)
SELECT Id,
ProjectId,
Project,
SubcontractorId,
Subcontractor,
NewGross,
ProjectWorkOrderId,
ValuationNumber,
WorksOrderName,
CreateByName,
CreateDate,
CreateByEmail,
Deleted,
ValuationStatusCategoryId,
FinalDateForPayment,
(SELECT SUM(NewGross) AS Expr1
FROM ValuationTotalsTemp AS tt
WHERE ( ProjectWorkOrderId = t.ProjectWorkOrderId )
AND ( t.ValuationNumber >= ValuationNumber )
GROUP BY ProjectWorkOrderId) AS CumulativeGross
FROM ValuationTotalsTemp AS t
Any ideas why this is?
The query with the table alias is generated by Entity Framework, so I have no way of changing it. I need to modify my SQL view so that it performs well even when queried with the table alias.
The execution plans are very different.
The slow one has a part that leaps out as being problematic. It estimates a single row will be input to a nested loops join and result in a single scan of ValuationItems. In practice it ends up performing more than 1,000 such scans.
Estimated
Actual
SQL Server 2014 introduced a new cardinality estimator. Your fast plan is using it. This is shown in the XML as CardinalityEstimationModelVersion="120" Your slow plan isn't (CardinalityEstimationModelVersion="70").
So it looks as though in this case the assumptions used by the new estimator give you a better plan.
The reason for the difference is probably that the fast one is running cross-database (it references [ProbeProduction].[dbo].[ValuationsTotal]), and presumably the database you are executing it from has a compatibility level of 120 (SQL Server 2014), so it automatically gets the new cardinality estimator.
The slow one is executing in the context of ProbeProduction itself, and I assume the compatibility level of that database must be below 120 (pre-2014), so you are defaulting to the legacy cardinality estimator.
You can use OPTION (QUERYTRACEON 2312) to get the slow query to use the new cardinality estimator (changing the database compatibility mode to globally alter the behaviour shouldn't be done without careful testing of existing queries as it can cause regressions as well as improvements).
Alternatively you could just try and tune the query working within the limits of the legacy CE. Perhaps adding join hints to encourage it to use something more akin to the faster plan.
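A short sketch of both routes, using the database name mentioned above (the COUNT(*) below is only a stand-in for the slow statement):
-- Which compatibility level (and hence which default CE) does the database get?
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'ProbeProduction';
-- Per-statement: opt just this query into the new cardinality estimator
SELECT COUNT(*)                      -- stand-in for the slow view query
FROM dbo.ValuationsTotal
OPTION (QUERYTRACEON 2312);
-- Database-wide (only after careful regression testing of existing queries)
-- ALTER DATABASE ProbeProduction SET COMPATIBILITY_LEVEL = 120;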
The two queries are different (column order!). It is reasonable to assume that one of them can make use of an index and is therefore much faster. I doubt it has anything to do with the aliasing.
For grins, would you take out the WHERE and give this a try?
It might be doing a bunch of loop joins and filtering at the end.
This might get it to filter up front:
FROM dbo.ValuationItems AS vi
INNER JOIN dbo.Valuations AS v
ON vi.ValuationId = v.Id
AND vi.Deleted = 0
AND v.Deleted = 0
-- other joins
-- NO where
If you have a lot of loop joins going on, then try INNER HASH JOIN (on all of them).
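For reference, the hint syntax looks like this (a sketch only; specifying join hints also forces the written join order, so check the resulting plan):
FROM dbo.ValuationItems AS vi
INNER HASH JOIN dbo.Valuations AS v
    ON vi.ValuationId = v.Id
    AND vi.Deleted = 0
    AND v.Deleted = 0
INNER HASH JOIN dbo.ProjectSubcontractorApplications AS sa
    ON sa.Id = v.ProjectSubcontractorApplicationId
-- ...and so on for the remaining joins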
I have a complex query:
SELECT DISTINCT ON (delivery.id)
delivery.id, dl_processing.pid
FROM mailer.mailer_message_recipient_rel AS delivery
JOIN mailer.mailer_message AS message ON delivery.message_id = message.id
JOIN mailer.mailer_message_recipient_rel_log AS dl_processing ON dl_processing.rel_id = delivery.id AND dl_processing.status = 1000
-- LEFT JOIN mailer.mailer_recipient AS r ON delivery.email = r.email
JOIN mailer.mailer_mailing AS mailing ON message.mailing_id = mailing.id
WHERE
NOT EXISTS (SELECT dl_finished.id FROM mailer.mailer_message_recipient_rel_log AS dl_finished WHERE dl_finished.rel_id = delivery.id AND dl_finished.status <> 1000) AND
dl_processing.date <= NOW() - (36000 * INTERVAL '1 second') AND
NOT EXISTS (SELECT ml.id FROM mailer.mailer_message_log AS ml WHERE ml.message_id = message.id) AND
-- (r.times_bounced < 5 OR r.times_bounced IS NULL) AND
NOT EXISTS (SELECT ur.id FROM mailer.mailer_unsubscribed_recipient AS ur WHERE ur.email = delivery.email AND ur.list_id = mailing.list_id)
ORDER BY delivery.id, dl_processing.id DESC
LIMIT 1000;
It is running very slowly, and the reason seems to be that Postgres is consistently avoiding using merge joins in its query plan despite me having all the indices that I would need for this. It looks really depressing:
http://explain.depesz.com/s/tVY
http://i.stack.imgur.com/Myw4R.png
Why would this happen? How do I troubleshoot such an issue?
UPD: with #wildplasser's help I have reworked the query to fix performance (while changing its semantics somewhat):
SELECT delivery.id, dl_processing.pid
FROM mailer.mailer_message_recipient_rel AS delivery
JOIN mailer.mailer_message AS message ON delivery.message_id = message.id
JOIN mailer.mailer_message_recipient_rel_log AS dl_processing ON dl_processing.rel_id = delivery.id AND dl_processing.status in (1000, 2, 5) AND dl_processing.date <= NOW() - (36000 * INTERVAL '1 second')
LEFT JOIN mailer.mailer_recipient AS r ON delivery.email = r.email
WHERE
(r.times_bounced < 5 OR r.times_bounced IS NULL) AND
NOT EXISTS (SELECT dl_other.id FROM mailer.mailer_message_recipient_rel_log AS dl_other WHERE dl_other.rel_id = delivery.id AND dl_other.id > dl_processing.id) AND
NOT EXISTS (SELECT ml.id FROM mailer.mailer_message_log AS ml WHERE ml.message_id = message.id) AND
NOT EXISTS (SELECT ur.id FROM mailer.mailer_unsubscribed_recipient AS ur JOIN mailer.mailer_mailing AS mailing ON message.mailing_id = mailing.id WHERE ur.email = delivery.email AND ur.list_id = mailing.list_id)
ORDER BY delivery.id
LIMIT 1000
It now runs well, but the query plan still sports these horrible nested loop joins <_<:
http://explain.depesz.com/s/MTo3
I would still like to know why that is.
The reason is that Postgres is actually doing the right thing, and I suck at math. Suppose table A has N rows, and table B has M rows, and they are being joined via a column that they both have a B-tree index for. Then the following is true:
Nested loop join's time complexity is not O(MN), like I naively thought, but O(M log N) or O(N log M), depending on which table is scanned linearly. If both are scanned by an index, we get O(M log M log N) or O(N log M log N), respectively. But since this is only required if a specific order of the rows is needed for yet another join or due to the ORDER clause, as we'll see it's not a bad deal at all.
Merge join's time complexity is O(M log M + N log N), which means that it loses to the nested loop join, provided that the asymptotic proportionality coefficients are the same, and AFAIK they should both be equal to 1 in most implementations. Since both tables must be iterated by the same index in the same direction, if a different order is required, an additional sort is needed, which easily makes the complexity worse than in the case of the nested loop join (see the rough worked comparison below).
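As a rough worked comparison under the assumptions above (a sketch only: both proportionality constants taken as 1, and M = N = 10^6, so log2 N ≈ 20):
\[
\text{indexed nested loop: } M \log_2 N \approx 10^6 \cdot 20 = 2 \times 10^7
\]
\[
\text{merge join including sorts: } M \log_2 M + N \log_2 N \approx 2 \cdot 10^6 \cdot 20 = 4 \times 10^7
\]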
So basically despite being associated with the merge sort, which we all love, merge join almost always sucks.
The reason why my first query was so slow was that it had to perform a sort before applying the limit, and it was also bad in many other ways. After applying #wildplasser's suggestions, I managed to reduce the number of (still expensive) nested loops and also allow the limit to be taken without a sort, thus ensuring that Postgres most likely won't need to run the outer scan to completion, which is where the bulk of the performance gain comes from.
I'm definitely not a DBA, and unfortunately we don't have a DBA to consult with at our company. I was wondering if someone could give me a recommendation on how to improve this query, either by changing the query itself or by adding indexes to the database.
Looking at the execution plan of the query, it seems like the outer joins are killing it. This query returns only 350k rows, but it takes almost 30 seconds to complete. I don't know much about databases, but I don't think this is good? Perhaps I'm wrong?
Any suggestions would be greatly appreciated. Thanks in advance.
As a side note, this is obviously being created by an ORM and not by me directly. We are using LINQ to SQL.
SELECT
[t12].[value] AS [DiscoveryEnabled],
[t12].[value2] AS [isConnected],
[t12].[Interface],
[t12].[Description] AS [InterfaceDescription],
[t12].[value3] AS [Duplex],
[t12].[value4] AS [IsEnabled],
[t12].[value5] AS [Host],
[t12].[value6] AS [HostIP],
[t12].[value7] AS [MAC],
[t12].[value8] AS [MACadded],
[t12].[value9] AS [PortFast],
[t12].[value10] AS [PortSecurity],
[t12].[value11] AS [ShortHost],
[t12].[value12] AS [SNMPlink],
[t12].[value13] AS [Speed],
[t12].[value14] AS [InterfaceStatus],
[t12].[InterfaceType],
[t12].[value15] AS [IsUserPort],
[t12].[value16] AS [VLAN],
[t12].[value17] AS [Code],
[t12].[Description2] AS [Description],
[t12].[Host] AS [DeviceName],
[t12].[NET_OUID],
[t12].[DisplayName] AS [Net_OU],
[t12].[Enclave]
FROM (
SELECT
[t1].[DiscoveryEnabled] AS [value],
[t1].[IsConnected] AS [value2],
[t0].[Interface],
[t0].[Description],
[t2].[Duplex] AS [value3],
[t0].[IsEnabled] AS [value4],
[t3].[Host] AS [value5],
[t6].[Address] AS [value6],
[t3].[MAC] AS [value7],
[t3].[MACadded] AS [value8],
[t2].[PortFast] AS [value9],
[t2].[PortSecurity] AS [value10],
[t4].[Host] AS [value11],
[t0].[SNMPlink] AS [value12],
[t2].[Speed] AS [value13],
[t2].[InterfaceStatus] AS [value14],
[t8].[InterfaceType],
[t0].[IsUserPort] AS [value15],
[t2].[VLAN] AS [value16],
[t9].[Code] AS [value17],
[t9].[Description] AS [Description2],
[t7].[Host], [t7].[NET_OUID],
[t10].[DisplayName],
[t11].[Enclave],
[t7].[Decommissioned]
FROM [dbo].[IDB_Interface] AS [t0]
LEFT OUTER JOIN [dbo].[IDB_InterfaceLayer2] AS [t1] ON [t0].[IDB_Interface_ID] = [t1].[IDB_Interface_ID]
LEFT OUTER JOIN [dbo].[IDB_LANinterface] AS [t2] ON [t1].[IDB_InterfaceLayer2_ID] = [t2].[IDB_InterfaceLayer2_ID]
LEFT OUTER JOIN [dbo].[IDB_Host] AS [t3] ON [t2].[IDB_LANinterface_ID] = [t3].[IDB_LANinterface_ID]
LEFT OUTER JOIN [dbo].[IDB_Infrastructure] AS [t4] ON [t0].[IDB_Interface_ID] = [t4].[IDB_Interface_ID]
LEFT OUTER JOIN [dbo].[IDB_AddressMapIPv4] AS [t5] ON [t3].[IDB_AddressMapIPv4_ID] = ([t5].[IDB_AddressMapIPv4_ID])
LEFT OUTER JOIN [dbo].[IDB_AddressIPv4] AS [t6] ON [t5].[IDB_AddressIPv4_ID] = [t6].[IDB_AddressIPv4_ID]
INNER JOIN [dbo].[ART_Asset] AS [t7] ON [t7].[ART_Asset_ID] = [t0].[ART_Asset_ID]
LEFT OUTER JOIN [dbo].[NSD_InterfaceType] AS [t8] ON [t8].[NSD_InterfaceTypeID] = [t0].[NSD_InterfaceTypeID]
INNER JOIN [dbo].[NSD_InterfaceCode] AS [t9] ON [t9].[NSD_InterfaceCodeID] = [t0].[NSD_InterfaceCodeID]
INNER JOIN [dbo].[NET_OU] AS [t10] ON [t10].[NET_OUID] = [t7].[NET_OUID]
INNER JOIN [dbo].[NET_Enclave] AS [t11] ON [t11].[NET_EnclaveID] = [t10].[NET_EnclaveID]
) AS [t12]
WHERE ([t12].[Enclave] = 'USMC') AND (NOT ([t12].[Decommissioned] = 1))
LINQ-TO-SQL Query:
return from t in db.IDB_Interfaces
join v in db.IDB_InterfaceLayer3s on t.IDB_Interface_ID equals v.IDB_Interface_ID
join u in db.ART_Assets on t.ART_Asset_ID equals u.ART_Asset_ID
join c in db.NET_OUs on u.NET_OUID equals c.NET_OUID
join w in
(from d in db.IDB_InterfaceIPv4s
select new { d.IDB_InterfaceIPv4_ID, d.IDB_InterfaceLayer3_ID, d.IDB_AddressMapIPv4_ID, d.IDB_AddressMapIPv4.IDB_AddressIPv4.Address })
on v.IDB_InterfaceLayer3_ID equals w.IDB_InterfaceLayer3_ID
join h in db.NET_Enclaves on c.NET_EnclaveID equals h.NET_EnclaveID into enclaveLeftJoin
from i in enclaveLeftJoin.DefaultIfEmpty()
join m in
(from z in db.IDB_StandbyIPv4s
select new
{
z.IDB_InterfaceIPv4_ID,
z.IDB_AddressMapIPv4_ID,
z.IDB_AddressMapIPv4.IDB_AddressIPv4.Address,
z.Preempt,
z.Priority
})
on w.IDB_InterfaceIPv4_ID equals m.IDB_InterfaceIPv4_ID into standbyLeftJoin
from k in standbyLeftJoin.DefaultIfEmpty()
where t.ART_Asset.Decommissioned == false
select new NetIDBGridDataResults
{
DeviceName = u.Host,
Host = u.Host,
Interface = t.Interface,
IPAddress = w.Address,
ACLIn = v.InboundACL,
ACLOut = v.OutboundACL,
VirtualAddress = k.Address,
VirtualPriority = k.Priority,
VirtualPreempt = k.Preempt,
InterfaceDescription = t.Description,
Enclave = i.Enclave
};
As a rule (and this is very general), you want an index on:
JOIN fields (both sides)
Common WHERE filter fields
Possibly fields you aggregate
For this query, start by checking your JOIN criteria. Any of those columns missing an index will force a table scan, which is a big hit.
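As a hedged illustration using some of the join and filter columns from the generated SQL above (index names are invented; skip any that already exist, e.g. via primary or foreign key indexes):
CREATE INDEX IX_IDB_InterfaceLayer2_InterfaceID
    ON dbo.IDB_InterfaceLayer2 (IDB_Interface_ID);
CREATE INDEX IX_IDB_LANinterface_Layer2ID
    ON dbo.IDB_LANinterface (IDB_InterfaceLayer2_ID);
CREATE INDEX IX_IDB_Host_LANinterfaceID
    ON dbo.IDB_Host (IDB_LANinterface_ID);
-- WHERE-clause support: Enclave filter and Decommissioned flag
CREATE INDEX IX_NET_Enclave_Enclave
    ON dbo.NET_Enclave (Enclave);
CREATE INDEX IX_ART_Asset_NET_OUID
    ON dbo.ART_Asset (NET_OUID, Decommissioned);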
Looking at the execution plan of the query it seems like the outer joins are killing the query.
This query only returns 350k results, but it takes almost 30 seconds to complete. I don't know
much about DB's, but I don't think this is good? Perhaps I'm wrong?
A man has got to do what a man has got to do.
The joins may kill you, but when you need them, YOU NEED THEM. Some tasks take long.
Make sure you have all the indices you need.
Make sure your SQL Server is not a sad joke hardware-wise.
That is all you can do.
I would bet someone has no clue about SQL and needs to be enlightened about the power of indices.