SQL queries slow when running in sequence, but quick when running separately

I have a table that I will populate with values from an expensive calculation (with XQuery from an immutable XML column). To speed up deployment to production, I have precalculated the values on a test server and saved them to a file with BCP.
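For reference, a sketch of such an export (MyDb and TESTSERVER are illustrative; -n writes native format, -T uses a trusted connection):
bcp "SELECT ID, TotalQuantity FROM MyDb.dbo.FOO" queryout C:\foo.dat -n -S TESTSERVER -T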
My script is as follows:
-- Lots of other work, including modifying OtherTable
CREATE TABLE FOO (...)
GO
BULK INSERT FOO
FROM 'C:\foo.dat';
GO
-- rerun from here after the break
INSERT INTO FOO
(ID, TotalQuantity)
SELECT
e.ID,
SUM(e.Quantity) as TotalQuantity
FROM (select
o.ID,
h.n.value('TotalQuantity[1]/.', 'int') as Quantity
FROM dbo.OtherTable o
CROSS APPLY o.XmlColumn.nodes('(item/.../salesorder/)') h(n)
WHERE o.ID NOT IN (SELECT DISTINCT ID FROM FOO)
) as E
GROUP BY e.ID
When I run the script in Management Studio, the first two statements complete within seconds, but the last statement takes 4 hours to complete. Since no rows have been added to OtherTable since my foo.dat was computed, Management Studio reports (0 row(s) affected).
If I cancel the query execution after a couple of minutes, select just the last query, and run it separately, it completes within 5 seconds.
Notable facts:
The OtherTable contains 200k rows and the data in XmlColumn is pretty large, total table size ~3GB
The FOO table gets 1.3M rows
What could possibly make the difference?
Management Studio has implicit transactions turned off. As far as I can understand, each statement will then run in its own transaction.
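For what it's worth, the session's effective SET options can be checked with:
-- Lists the SET options in effect; IMPLICIT_TRANSACTIONS should not appear as SET
DBCC USEROPTIONS;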
Update:
If I first select and run the script up to -- rerun from here after the break, then select and run just the last query, it is still slow until I cancel execution and try again. This at least rules out any effect of running "together" with the previous code in the script, and boils it down to the same query being slow on first execution and fast on the second (with all other conditions the same).

Probably different execution plans. See Slow in the Application, Fast in SSMS? Understanding Performance Mysteries.

Could it possibly be related to the statistics being completely wrong on the newly created Foo table? If SQL Server automatically updates the statistics when it first runs the query, the second run would have its execution plan created from up-to-date statistics.
What if you check the statistics right after the bulk insert (with the STATS_DATE function) and then check them again after having cancelled the long-running query? Did the stats get updated, even though the query was cancelled?
In that case, an UPDATE STATISTICS on Foo right after the bulk insert could help.
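A sketch of that check and fix, assuming FOO lives in the dbo schema:
-- When was each statistics object on FOO last updated?
SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.FOO');
-- Refresh right after the bulk insert
UPDATE STATISTICS dbo.FOO;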

Not sure exactly why it helped, but I rewrote the last query to use a left outer join instead, and suddenly the execution time dropped to 15 milliseconds.
INSERT INTO FOO
(ID, TotalQuantity)
SELECT
e.ID,
SUM(e.Quantity) as TotalQuantity
FROM (select
o.ID,
h.n.value('TotalQuantity[1]/.', 'int') as Quantity
FROM dbo.OtherTable o
LEFT OUTER JOIN FOO f ON o.ID = f.ID
CROSS APPLY o.XmlColumn.nodes('(item/.../salesorder/)') h(n)
WHERE f.ID IS NULL
) as E
GROUP BY e.ID
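For reference, the same anti-join can also be phrased with NOT EXISTS (a sketch over the same tables; it often produces the same plan as the LEFT JOIN ... IS NULL form):
INSERT INTO FOO (ID, TotalQuantity)
SELECT e.ID, SUM(e.Quantity) AS TotalQuantity
FROM (
    SELECT o.ID,
        h.n.value('TotalQuantity[1]/.', 'int') AS Quantity
    FROM dbo.OtherTable o
    CROSS APPLY o.XmlColumn.nodes('(item/.../salesorder/)') h(n)
    WHERE NOT EXISTS (SELECT 1 FROM FOO f WHERE f.ID = o.ID)
) AS e
GROUP BY e.ID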

Related

force Oracle to process recursive CTE on remote db site (perhaps using DRIVING_SITE hint)

I am trying to fetch data from a remote table. The data is expanded from a seed set of data in a local table using a recursive CTE. The query is very slow (expanding 300 seed rows to 800 final rows takes 7 minutes).
For other "tiny local, huge remote"-cases with no recursive query the DRIVING_SITE hint works excellently. I also tried to export seed set from local table into auxiliary table on remotedb with same structure and - being logged in remotedb - ran query as pure local query (my_table as p, my_table_seed_copy as i). It took 4s, which encouraged me to believe forcing query to remote site would make query fast.
What's the correct way to force Oracle to execute recursive query on the remote site?
with s (id, data) as (
select p.id, p.data
from my_table@remotedb p
where p.id in (select i.id from my_table i)
union all
select p.id, p.data
from s
join my_table@remotedb p on ...
)
select /*+DRIVING_SITE(p)*/ s.*
from s;
In the query above, I tried
select /*+DRIVING_SITE(p)*/ s.* in main select
select /*+DRIVING_SITE(s)*/ s.* in main select
omitting DRIVING_SITE in whole query
select /*+DRIVING_SITE(x)*/ s.* from s, dual@remotedb x as main select
select /*+DRIVING_SITE(p)*/ p.id, p.data in first inner select
select /*+DRIVING_SITE(p)*/ p.id, p.data in both inner selects
select /*+DRIVING_SITE(p) MATERIALIZE*/ p.id, p.data in both inner selects
(Just for completeness: rewriting to CONNECT BY is not applicable in this case; the actual query is more complex and uses constructs which cannot be expressed with CONNECT BY.)
All without success (i.e. data returned after 7 minutes).
The recursive query actually performs a breadth-first search: the seed rows represent the 0-th level, and the recursive part finds the elements on the n-th level from the elements on the (n-1)-th level. The original query was intended to be part of a merge ... using ... clause.
Hence I rewrote the query as a PL/SQL loop. Every cycle generates one level. The merge prevents insertion of duplicates, so eventually no new row is added and the loop exits (the transitive closure is constructed). Pseudocode:
loop
merge into my_table using (
select /*+DRIVING_SITE(r)*/ distinct r.* /*###BULKCOLLECT###*/
from my_table l
join my_table@remotedb r on ... -- same condition as s and p in the original query are joined on
) ...
exit when rows_inserted = 0;
end loop;
The actual code is not so simple, since DRIVING_SITE does not work directly with merge, so we have to transfer data via a work collection, but that's a different story. Also, the count of inserted rows cannot easily be determined; it must be computed as the difference between the row counts after and before the merge.
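A minimal PL/SQL skeleton of that loop, for illustration only: the join condition (parent_id) is hypothetical, and it glosses over the work-collection transfer just described:
DECLARE
  v_before PLS_INTEGER;
  v_after  PLS_INTEGER;
BEGIN
  LOOP
    SELECT COUNT(*) INTO v_before FROM my_table;
    -- one breadth-first level per iteration
    MERGE INTO my_table l
    USING (
      SELECT /*+DRIVING_SITE(r)*/ DISTINCT r.id, r.data
      FROM my_table s
      JOIN my_table@remotedb r ON r.parent_id = s.id  -- hypothetical condition
    ) src
    ON (l.id = src.id)
    WHEN NOT MATCHED THEN INSERT (id, data) VALUES (src.id, src.data);
    SELECT COUNT(*) INTO v_after FROM my_table;
    EXIT WHEN v_after = v_before;
  END LOOP;
END;
/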
The solution is not ideal. Still, it is much faster than the recursive CTE (30s, 13 cycles), because the queries provably utilize the DRIVING_SITE hint.
I will leave the question open for a while, in case somebody finds an answer for how to make the recursive query work, or proves that it is not possible.

How do you perform large queries at the same time with the index job?

I have a large database (1.7 TB) and a maintenance index job that rebuilds/reorganizes the indexes. This job is scheduled at 11:00 pm.
This morning I was checking the queries running on the server and noticed that the index job was still running (more than 10 hours), because another T-SQL query had been running on the server for more than 22 hours and had locked the table whose indexes the job was trying to rebuild. It looked like an endless process, so I had to kill the blocking session (169) to let the index job keep running. My question is: how can I avoid locking the tables the index job is working on? I know that rebuilding an index locks the table because it is offline, but should I do some optimizing on the T-SQL query that was running for more than 22 hours? This query is run often by our ERP application during the day.
The query is:
SELECT T1.ACCOUNTNUM,T1.AMOUNTCUR,T1.AMOUNTMST,T1.DUEDATE,T1.RECID,T1.RECVERSION,T1.REFRECID,T1.TRANSDATE,T1.RECVERSION,T2.INVOICE
,T2.TRANSTYPE,T2.TRANSDATE,T2.AMOUNTCUR,T2.ACCOUNTNUM,T2.VOUCHER,T2.COLLECTIONLETTERCODE,T2.SETTLEAMOUNTCUR,T2.CURRENCYCODE,
T2.CUSTBILLINGCLASSIFICATION,T2.RECVERSION,T2.RECID,T3.ACCOUNTNUM,T3.PARTY,T3.CURRENCY,T3.RECID,T3.RECVERSION
FROM CUSTTRANSOPEN T1
CROSS JOIN CUSTTRANS T2
CROSS JOIN CUSTTABLE T3
WHERE (((T1.PARTITION=@P1) AND (T1.DATAAREAID=@P2)) AND (T1.DUEDATE<@P3)) AND (((T2.PARTITION=@P4) AND
(T2.DATAAREAID=@P5)) AND (((((((T2.TRANSTYPE<=@P6) OR (T2.TRANSTYPE=@P7)) OR ((T2.TRANSTYPE=@P8) OR (T2.TRANSTYPE=@P9)))
OR (((T2.TRANSTYPE=@P10) OR (T2.TRANSTYPE=@P11)) OR (T2.TRANSTYPE=@P12))) AND (T2.AMOUNTCUR>=@P13))
AND (T1.ACCOUNTNUM=T2.ACCOUNTNUM)) AND (T1.REFRECID=T2.RECID))) AND (((T3.PARTITION=@P14) AND (T3.DATAAREAID=@P15))
AND (T2.ACCOUNTNUM=T3.ACCOUNTNUM)) ORDER BY T1.DUEDATE OPTION(FAST 5)
The locked table is CUSTTRANSOPEN.
I mean, for example, should I put a WITH (NOLOCK) hint in the query?
How do you perform large queries at the same time with the index job?
I have Standard Edition SQL Server, so online rebuilding is not possible.
You have two problems:
- A large query, which might be tuned
- A simultaneously running ALTER INDEX
Tuning the query:
You may put NOLOCK only if you do not care about the result.
Your query does cartesian joins, which are supposed to produce the multiplication of the rows of all three tables. No wonder that it takes 20 hours. It might not be the intention. So, determine what exactly you want. Here is a sample of a simplified query. Verify that it produces the same logic:
SELECT T1.ACCOUNTNUM, T1.AMOUNTCUR, T1.AMOUNTMST, T1.DUEDATE, T1.RECID
, T1.RECVERSION, T1.REFRECID, T1.TRANSDATE, T1.RECVERSION, T2.INVOICE
, T2.TRANSTYPE, T2.TRANSDATE, T2.AMOUNTCUR, T2.ACCOUNTNUM, T2.VOUCHER
, T2.COLLECTIONLETTERCODE, T2.SETTLEAMOUNTCUR, T2.CURRENCYCODE, T2.CUSTBILLINGCLASSIFICATION, T2.RECVERSION
, T2.RECID, T3.ACCOUNTNUM, T3.PARTY, T3.CURRENCY, T3.RECID, T3.RECVERSION
FROM CUSTTRANSOPEN AS T1
INNER JOIN CUSTTRANS AS T2 ON T1.ACCOUNTNUM=T2.ACCOUNTNUM AND T1.REFRECID=T2.RECID AND
    T2.PARTITION=@P4 AND T2.DATAAREAID=@P5 AND T2.AMOUNTCUR>=@P13 AND
    (T2.TRANSTYPE<=@P6 OR T2.TRANSTYPE IN (@P7, @P8, @P9, @P10, @P11, @P12))
INNER JOIN CUSTTABLE AS T3 ON T2.ACCOUNTNUM=T3.ACCOUNTNUM AND T3.PARTITION=@P14 AND T3.DATAAREAID=@P15
WHERE T1.PARTITION=@P1 AND T1.DATAAREAID=@P2 AND T1.DUEDATE<@P3
ORDER BY T1.DUEDATE
OPTION (FAST 5);
You have to look at the execution plan.
See if the plan is better when you exclude OPTION (FAST 5).
See if you can improve the query by indexing.
You can alter indexes on a one-by-one basis, excluding your CUSTTRANSOPEN table, and then ALTER its indexes once the query has finished.
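For example, a sketch that generates per-table ALTER INDEX statements while skipping that table (the choice of REBUILD vs REORGANIZE is up to you):
DECLARE @sql nvarchar(max) = N'';
SELECT @sql += N'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N' REBUILD;' + CHAR(10)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name <> N'CUSTTRANSOPEN';
EXEC sys.sp_executesql @sql;
-- Rebuild CUSTTRANSOPEN separately, once the long-running query has finished:
-- ALTER INDEX ALL ON dbo.CUSTTRANSOPEN REBUILD;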

SubQuery vs TempTable before Merge

I have a complex query that I want to use as the Source of a Merge into a table. This will be executed over millions of rows. Currently I am trying to apply constraints to the data by inserting it into a temp table before the merge.
The operations are:
Filter out duplicate data.
Join some tables to pull in additional data
Insert into the temp table.
Here is the query.
-- Get all Orders that aren't in the system
WITH Orders AS
(
SELECT *
FROM [Staging].Orders o
WHERE NOT EXISTS
(
SELECT 1
FROM Maps.VendorBOrders vbo
JOIN OrderFact ofact
ON ofact.Id = vbo.OrderFactId
AND vbo.InternalOrderId = o.InternalOrderId
AND ofact.DataSetId = o.DataSetId
AND ofact.IsDelete = 0
)
)
INSERT INTO #VendorBOrders
(
CustomerId
,OrderId
,OrderTypeId
,TypeCode
,LineNumber
,FromDate
,ThruDate
,LineFromDate
,LineThruDate
,PlaceOfService
,RevenueCode
,BillingProviderId
,Cost
,AdjustmentTypeCode
,PaymentDenialCode
,EffectiveDate
,IDRLoadDate
,RelatedOrderId
,DataSetId
)
SELECT
vc.CustomerId
,OrderId
,OrderTypeId
,TypeCode
,LineNumber
,FromDate
,ThruDate
,LineFromDate
,LineThruDate
,PlaceOfService
,RevenueCode
,bp.Id
,Cost
,AdjustmentTypeCode
,PaymentDenialCode
,EffectiveDate
,IDRLoadDate
,ro.Id
,o.DataSetId
FROM
Orders o
-- Join related orders to match orders sharing same instance
JOIN Maps.VendorBRelatedOrder ro
ON ro.OrderControlNumber = o.OrderControlNumber
AND ro.EquitableCustomerId = o.EquitableCustomerId
AND ro.DataSetId = o.DataSetId
JOIN BillingProvider bp
ON bp.ProviderNPI = o.ProviderNPI
-- Join on customers and fail if the customer doesn't exist
LEFT OUTER JOIN [Maps].VendorBCustomer vc
ON vc.ExtenalCustomerId = o.ExtenalCustomerId
AND vc.VendorId = o.VendorId;
I am wondering if there is anything I can do to optimize it for time. I have tried using the DB Engine Tuner, but this query takes 100x more CPU Time than the other queries I am running. Is there anything else that I can look into or can the query not be improved further?
A CTE is just syntax; that CTE is evaluated (run) on that join.
First just run it as a select statement (no insert).
If the select is slow, then:
- Move that CTE to a #temp table so it is evaluated once and materialized (see the sketch after this list).
- Put an index (PK if applicable) on the three join columns.
If the select is not slow, then it is insert time on #VendorBOrders:
- First create only the PK, and sort the insert on the PK so as not to fragment that clustered index.
- Then, AFTER the insert is complete, build any other necessary indexes.
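A minimal sketch of that materialization, based on the CTE above (qualifying InternalOrderId with vbo is an assumption):
-- Evaluate the CTE once into a temp table
SELECT o.*
INTO #Orders
FROM [Staging].Orders AS o
WHERE NOT EXISTS
(
    SELECT 1
    FROM Maps.VendorBOrders AS vbo
    JOIN OrderFact AS ofact
        ON ofact.Id = vbo.OrderFactId
        AND vbo.InternalOrderId = o.InternalOrderId
        AND ofact.DataSetId = o.DataSetId
        AND ofact.IsDelete = 0
);
-- Index the three columns used by the subsequent joins
CREATE CLUSTERED INDEX IX_Orders_Join
    ON #Orders (OrderControlNumber, EquitableCustomerId, DataSetId);
-- Then run the INSERT INTO #VendorBOrders ... SELECT ... FROM #Orders AS o ...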
Generally when I do speed testing I perform checks on parts of the SQL to see where the problem lies. Turn on the 'Execution plan' and see where a lot of the time is going. Also, if you want the quick and dirty check, highlight just your CTE and run that. Is that fast? Yes: move on.
I have at times found that a single index being off throws off a whole complex set of joins, merely by having the database do one part of something large and then finding that piece.
Another idea: if you have a fast tempdb on the production environment, dump your CTE to a temp table as well, index it, and see if that speeds things up. Sometimes CTEs, table variables, and temp tables lose some performance at joins. I have found that creating an index on a partial object improves performance at times, but you are also putting more load on tempdb when you do this, so keep that in mind.

Can this SQL Query be optimized to run faster?

I have an SQL Query (For SQL Server 2008 R2) that takes a very long time to complete. I was wondering if there was a better way of doing it?
SELECT @count = COUNT(Name)
FROM Table1 t
WHERE t.Name = @name AND t.Code NOT IN (SELECT Code FROM ExcludedCodes)
Table1 has around 90 million rows in it and is indexed by Name and Code.
ExcludedCodes only has around 30 rows in it.
This query is in a stored procedure and gets called around 40k times; the total time the procedure takes to finish is 27 minutes. I believe this is my biggest bottleneck because of the massive number of rows it queries against and the number of times it does it.
So if you know of a good way to optimize this, it would be greatly appreciated! If it cannot be optimized, then I guess I'm stuck with 27 min...
EDIT
I changed the NOT IN to NOT EXISTS and it cut the time down to 10:59, so that alone is a massive gain on my part. I am still going to attempt the GROUP BY approach suggested below, but that will require a complete rewrite of the stored procedure and might take some time... (as I said before, I'm not the best at SQL, but it is starting to grow on me. ^^)
In addition to workarounds to get the query itself to respond faster, have you considered maintaining a column in the table that tells whether it is in this set or not? It requires a lot of maintenance but if the ExcludedCodes table does not change often, it might be better to do that maintenance. For example you could add a BIT column:
ALTER TABLE dbo.Table1 ADD IsExcluded BIT;
Make it NOT NULL and default to 0. Then you could create a filtered index:
CREATE INDEX n ON dbo.Table1(name)
WHERE IsExcluded = 0;
Now you just have to update the table once:
UPDATE t
SET IsExcluded = 1
FROM dbo.Table1 AS t
INNER JOIN dbo.ExcludedCodes AS x
ON t.Code = x.Code;
And ongoing you'd have to maintain this with triggers on both tables. With this in place, your query becomes:
SELECT @Count = COUNT(Name)
FROM dbo.Table1 WHERE Name = @name AND IsExcluded = 0;
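As for the trigger maintenance, a minimal sketch for the insert side of ExcludedCodes (deletes from ExcludedCodes and inserts into Table1 would need similar handling; the trigger name is illustrative):
CREATE TRIGGER dbo.trExcludedCodes_AfterInsert
ON dbo.ExcludedCodes
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- flag every row whose code was just added to the exclusion list
    UPDATE t
    SET IsExcluded = 1
    FROM dbo.Table1 AS t
    INNER JOIN inserted AS i ON t.Code = i.Code;
END;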
EDIT
As for "NOT IN being slower than LEFT JOIN" here is a simple test I performed on only a few thousand rows:
EDIT 2
I'm not sure why this query wouldn't do what you're after, and be far more efficient than your 40K loop:
SELECT src.Name, COUNT(*)
FROM dbo.Table1 AS src
INNER JOIN #temptable AS t
ON src.Name = t.Name
WHERE src.Code NOT IN (SELECT Code FROM dbo.ExcludedCodes)
GROUP BY src.Name;
Or the LEFT JOIN equivalent:
SELECT src.Name, COUNT(*)
FROM dbo.Table1 AS src
INNER JOIN #temptable AS t
ON src.Name = t.Name
LEFT OUTER JOIN dbo.ExcludedCodes AS x
ON src.Code = x.Code
WHERE x.Code IS NULL
GROUP BY src.Name;
I would put money on either of those queries taking less than 27 minutes. I would even suggest that running both queries sequentially will be far faster than your one query that takes 27 minutes.
Finally, you might consider an indexed view. I don't know your table structure or whether you violate any of the restrictions, but it is worth investigating IMHO.
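For illustration, a hedged sketch of such an indexed view, reusing the IsExcluded flag from above (indexed views require SCHEMABINDING and COUNT_BIG, and cannot contain subqueries):
CREATE VIEW dbo.vNameCounts
WITH SCHEMABINDING
AS
SELECT Name, COUNT_BIG(*) AS NameCount
FROM dbo.Table1
WHERE IsExcluded = 0
GROUP BY Name;
GO
-- Materializes the view; the per-name counts are then maintained automatically
CREATE UNIQUE CLUSTERED INDEX IX_vNameCounts ON dbo.vNameCounts (Name);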
You say this gets called around 40K times. Why? Is it in a cursor? If so, do you really need a cursor? Couldn't you put the values you want for @name in a temp table, index it, and then join to it?
select t.name, count(t.name)
from Table1 t
join #name n on t.name = n.name
where NOT EXISTS (SELECT Code FROM ExcludedCodes WHERE Code = t.code)
group by t.name
That might get you all your results in one query and is almost certainly faster than 40K separate queries. Of course, if you need the count for all the names, it's even simpler:
select t.name, count(t.name)
from Table1 t
where NOT EXISTS (SELECT Code FROM ExcludedCodes WHERE Code = t.Code)
group by t.name
NOT EXISTS typically performs better than NOT IN, but you should test it on your system.
SELECT @count = COUNT(Name)
FROM Table1 t
WHERE t.Name = @name AND NOT EXISTS (SELECT 1 FROM ExcludedCodes e WHERE e.Code = t.Code)
Without knowing more about your query it's tough to supply concrete optimization suggestions (i.e. code suitable for copy/paste). Does it really need to run 40,000 times? It sounds like your stored procedure needs reworking, if that's feasible. You could exec the above once at the start of the proc, insert the results into a temp table (which can keep the indexes from Table1), and then join on that instead of running this query.
This particular bit might not even be the bottleneck that makes your query run for 27 minutes. For example, are you using a cursor over those 90 million rows, or scalar-valued UDFs in your WHERE clauses?
Have you thought about doing the query once and populating the data in a table variable or temp table? Something like:
insert into #temp (name, NameCount)
select name, count(name)
from Table1
where code not in (select code from ExcludedCodes)
group by name
And don't forget that you could possibly use a filtered index as long as the excluded codes table is somewhat static.
Start by evaluating the execution plan. Which is the heaviest part to compute?
Regarding the relation between the two tables, use a JOIN on indexed columns: indexes will optimize query execution.

Why is inserting into and joining #temp tables faster?

I have a query that looks like
SELECT
P.Column1,
P.Column2,
P.Column3,
...
(
SELECT
A.ColumnX,
A.ColumnY,
...
FROM
dbo.TableReturningFunc1(@StaticParam1, @StaticParam2) AS A
WHERE
A.Key = P.Key
FOR XML AUTO, TYPE
),
(
SELECT
B.ColumnX,
B.ColumnY,
...
FROM
dbo.TableReturningFunc2(@StaticParam1, @StaticParam2) AS B
WHERE
B.Key = P.Key
FOR XML AUTO, TYPE
)
FROM
(
<joined tables here>
) AS P
FOR XML AUTO,ROOT('ROOT')
P has ~ 5000 rows
A and B ~ 4000 rows each
This query has a runtime performance of ~10+ minutes.
Changing it to this however:
SELECT
P.Column1,
P.Column2,
P.Column3,
...
INTO #P
FROM
(
<joined tables here>
) AS P
SELECT
A.ColumnX,
A.ColumnY,
...
INTO #A
FROM
dbo.TableReturningFunc1(@StaticParam1, @StaticParam2) AS A
SELECT
B.ColumnX,
B.ColumnY,
...
INTO #B
FROM
dbo.TableReturningFunc2(@StaticParam1, @StaticParam2) AS B
SELECT
P.Column1,
P.Column2,
P.Column3,
...
(
SELECT
A.ColumnX,
A.ColumnY,
...
FROM
#A AS A
WHERE
A.Key = P.Key
FOR XML AUTO, TYPE
),
(
SELECT
B.ColumnX,
B.ColumnY,
...
FROM
#B AS B
WHERE
B.Key = P.Key
FOR XML AUTO, TYPE
)
FROM #P AS P
FOR XML AUTO,ROOT('ROOT')
Has a performance of ~4 seconds.
This doesn't make a lot of sense, as it would seem the cost of inserting into a temp table and then doing the join should be higher by default. My inclination is that SQL Server is doing the wrong type of "join" with the subquery, but maybe I've missed something; there's no way to specify the join type to use with correlated subqueries.
Is there a way to achieve this without using #temp tables/@table variables, via indexes and/or hints?
EDIT: Note that dbo.TableReturningFunc1 and dbo.TableReturningFunc2 are inline TVFs, not multi-statement ones; essentially they are "parameterized" views.
Your functions are being reevaluated for each row in P.
What you do with the temp tables is in fact caching the result sets generated by the functions, thus removing the need to reevaluate.
Inserting into a temp table is fast because it does not generate redo / rollback.
Joins are also fast, since having a stable result set allows the possibility of creating a temporary index with an Eager Spool or a Worktable.
You can reuse the functions without temp tables, using CTEs, but for this to be efficient, SQL Server needs to materialize the results of the CTEs.
You may try to force it to do this by using an ORDER BY inside a subquery:
WITH f1 AS
(
SELECT TOP 1000000000
A.ColumnX,
A.ColumnY
FROM dbo.TableReturningFunc1(@StaticParam1, @StaticParam2) AS A
ORDER BY
A.key
),
f2 AS
(
SELECT TOP 1000000000
B.ColumnX,
B.ColumnY
FROM dbo.TableReturningFunc2(@StaticParam1, @StaticParam2) AS B
ORDER BY
B.Key
)
SELECT …
This may result in an Eager Spool being generated by the optimizer.
However, this is far from guaranteed.
The guaranteed way is to add an OPTION (USE PLAN) to your query and wrap the corresponding CTE into the Spool clause.
See this entry in my blog on how to do that:
Generating XML in subqueries
This is hard to maintain, since you will need to rewrite your plan each time you rewrite the query, but this works well and is quite efficient.
Using the temp tables will be much easier, though.
This answer needs to be read together with Quassnoi's article
http://explainextended.com/2009/05/28/generating-xml-in-subqueries/
With judicious application of CROSS APPLY, you can force the caching or shortcut evaluation of inline TVFs. This query returns instantaneously.
SELECT *
FROM (
SELECT (
SELECT f.num
FOR XML PATH('fo'), ELEMENTS ABSENT
) AS x
FROM [20090528_tvf].t_integer i
cross apply (
select num
from [20090528_tvf].fn_num(9990) f
where f.num = i.num
) f
) q
--WHERE x IS NOT NULL -- covered by using CROSS apply
FOR XML AUTO
You haven't provided real structures so it's hard to construct something meaningful, but the technique should apply as well.
If you change the multi-statement TVF in Quassnoi's article to an inline TVF, the plan becomes even faster (at least one order of magnitude) and the plan magically reduces to something I cannot understand (it's too simple!).
CREATE FUNCTION [20090528_tvf].fn_num(@maxval INT)
RETURNS TABLE
AS RETURN
SELECT num + @maxval num
FROM t_integer
Statistics
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
(10 row(s) affected)
Table 't_integer'. Scan count 2, logical reads 22, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 2 ms.
This is a problem with your sub-query referencing your outer query: the sub-query has to be compiled and executed for each row in the outer query.
Rather than using explicit temp tables, you can use a derived table.
To simplify your example:
SELECT P.Column1,
(SELECT [your XML transformation etc] FROM A where A.ID = P.ID) AS A
If P contains 10,000 records then SELECT A.ColumnX FROM A where A.ID = P.ID will be executed 10,000 times.
You can instead use a derived table as thus:
SELECT P.Column1, A2.Column FROM
P LEFT JOIN
(SELECT A.ID, [your XML transformation etc] FROM A) AS A2
ON P.ID = A2.ID
Okay, that's not very illustrative pseudo-code, but the basic idea is the same as the temp table, except that SQL Server does the whole thing in memory: it first selects all the data in "A2" and constructs a temp table in memory, then joins on it. This saves you having to select it into a temp table yourself.
Just to give you an example of the principle in another context where it may make more immediate sense. Consider employee and absence information where you want to show the number of days absence recorded for each employee.
Bad: (runs as many queries as there are employees in the DB)
SELECT EmpName,
(SELECT SUM(absdays) FROM Absence where Absence.PerID = Employee.PerID) AS Abstotal
FROM Employee
Good: (Runs only two queries)
SELECT EmpName, AbsSummary.Abstotal
FROM Employee LEFT JOIN
(SELECT PerID, SUM(absdays) As Abstotal
FROM Absence GROUP BY PerID) AS AbsSummary
ON AbsSummary.PerID = Employee.PerID
There are several possible reasons why using intermediate temp tables might speed up a query, but the most likely in your case is that the functions being called (but not listed) are probably multi-statement TVFs and not inline TVFs. Multi-statement TVFs are opaque to the optimization of their calling queries, so the optimizer cannot tell whether there are any opportunities for reuse of data or other logical/physical operator re-ordering optimizations. Thus, all it can do is re-execute the TVFs every time the containing query is supposed to produce another row with the XML columns.
In short, multi-statement TVFs frustrate the optimizer.
The usual solutions, in order of (typical) preference, are:
Re-write the offending multi-statement TVF to be an inline TVF (see the sketch after this list)
In-line the function code into the calling query, or
Dump the offending TVF's data into a temp table, which is what you've done...
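To illustrate option 1, a hedged sketch of the same logic written both ways (dbo.SomeTable and its columns are hypothetical):
-- Multi-statement TVF: the body is opaque to the calling query's optimizer
CREATE FUNCTION dbo.fn_multi (@key int)
RETURNS @r TABLE (id int, val int)
AS
BEGIN
    INSERT @r SELECT id, val FROM dbo.SomeTable WHERE id = @key;
    RETURN;
END;
GO
-- Inline TVF: the body is expanded into the calling query like a view,
-- so the optimizer can reorder and reuse it
CREATE FUNCTION dbo.fn_inline (@key int)
RETURNS TABLE
AS RETURN
    SELECT id, val FROM dbo.SomeTable WHERE id = @key;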
Consider using the WITH common_table_expression construct for what you now have as sub-selects or temporary tables; see http://msdn.microsoft.com/en-us/library/ms175972(SQL.90).aspx .
"This makes not a lot of sense, as it would seem the cost to insert into a temp table and then do the join should be higher by default."
With temporary tables, you explicitly instruct SQL Server which intermediate storage to use. But if you stash everything in one big query, SQL Server will decide for itself. The difference is not really that big; at the end of the day, temporary storage is used whether you specify it as a temp table or not.
In your case, temporary tables work faster, so why not stick with them?
Agreed, temp tables are a good concept, but not always the best choice. When the row count in a table is large, say 40 million rows, and I want to update multiple columns by joining to another table, I prefer a common table expression that computes the updated columns in its select list with CASE statements, so the CTE's result set already contains the updated rows. Inserting 40 million records into a temp table with a SELECT using CASE statements took 21 minutes for me, and creating an index took another 10 minutes, so insert plus index creation came to roughly 30 minutes. Then I still had to apply the update by joining the temp table's result set back to the main table; that took 5 minutes to update 10 million of the 40 million records. So my overall update time for 10 million records was almost 35 minutes, versus 5 minutes with the common table expression. My choice in that case is the common table expression.
If temp tables turn out to be faster in your particular instance, you could instead try a table variable.
There is a good article here on the differences and performance implications:
http://www.codeproject.com/KB/database/SQP_performance.aspx