I have a query containing three inner join statements in the Where clause. The query takes roughly 2 minutes to execute. If I simply change the order of two of the inner joins, performance drops to 40 seconds.
How can doing nothing but changing the order of the inner joins have such a drastic impact on query performance? I would have thought the optimizer would figure all this out.
SQL is declarative, that is, the JOIN order should not matter.
However, it can matter in practice, for example with a complex query where the optimiser does not explore all possible join orders (doing so exhaustively could in theory take months).
Another possibility is that reordering produces a genuinely different query that returns different results, but that usually only happens with OUTER JOINs.
It could also be down to how the ON clauses are specified: they have to change if you reorder the FROM clause, unless you are using the older (and worse) join-in-the-WHERE-clause style.
Finally, if it's a concern, you could use parentheses (or a derived table) to make your intended evaluation order explicit, say, by filtering a large table first and joining to that result.
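For example, a minimal sketch (table and column names are hypothetical) that filters the large table inside a derived table before joining it:

-- Filter the large Orders table first, then join the much smaller result to Customers.
SELECT c.CustomerId, c.Name, o.OrderId, o.Total
FROM (
    SELECT OrderId, CustomerId, Total
    FROM Orders
    WHERE OrderDate >= '2023-01-01'   -- selective filter applied before the join
) AS o
INNER JOIN Customers c ON c.CustomerId = o.CustomerId;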
Because by changing the order of the joins, SQL Server is coming up with a different execution plan for your query (chances are it's changing the way it's filtering the tables based on your joins).
In this case, I'm guessing you have several large tables...one of which performs the majority of the filtering.
In one query, your joins are joining several of the large tables together and then filtering the records at the end.
In the other, you are filtering the first table down to a much smaller sub-set of the data...and then joining the rest of the tables in. Since that initial table got filtered before joining the other large recordsets, performance is much better.
You can always verify by running the query with the 'Show query plan' option enabled and comparing the plans for the two different join orders.
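For instance, a minimal T-SQL sketch (the table names are hypothetical); in SSMS you can also just toggle "Include Actual Execution Plan". Note that SET SHOWPLAN_TEXT must be the only statement in its batch:

SET SHOWPLAN_TEXT ON;
GO
-- The query below is not executed; SQL Server returns its estimated plan instead.
SELECT o.OrderId, c.Name
FROM Orders o
INNER JOIN Customers c ON c.CustomerId = o.CustomerId;
GO
SET SHOWPLAN_TEXT OFF;
GO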
I would have thought it was smart enough to do that as well, but clearly it's still performing the joins in the order you explicitly list them... As to why that affects the performance, if the first join produces an intermediate result set of only 100 records in one ordering scheme, then the second join will be from that 100-record set to the third table.
If putting the other join first produces a first intermediate result set of one million records, then the second join will be from a one million row result set to the third table...
Related
I have multiple SQL queries that I run one after the other to get a set of data. In each query, there are a bunch of tables joined that are exactly the same with the other queries. For example:
Query1
SELECT * FROM
Product1TableA A1
INNER JOIN Product1TableB B on A1.BId = B.Id
INNER JOIN CommonTable1 C on C.Id = B.CId
INNER JOIN CommonTable2 D on D.Id = B.DId
...
Query2
SELECT * FROM Product2TableA A2
INNER JOIN Product2TableB B on A2.BId = B.Id
INNER JOIN CommonTable1 C on C.Id = B.CId
INNER JOIN CommonTable2 D on D.Id = B.DId
...
I am playing around with re-ordering the joins (around 2 dozen tables joined per query) and I read here that this should not really affect query execution unless SQL "gives up" during optimization because of how big the query is...
What I am wondering is if bunching up common table joins at the start of all my queries actually helps...
In theory, the order of the joins in the FROM clause doesn't make a difference to query performance. For a small number of tables, there should be no difference. The optimizer should find the best execution path.
For a larger number of tables, the optimizer may have to short-circuit its search regarding join order. It would then be using heuristics -- and these could be affected by join order.
Earlier queries would have no effect on a particular execution plan.
If you are having problems with performance, I am guessing that join order is not the root cause. The most common problems that I see in SQL Server are inappropriate nested-loop joins -- and these can be handled with an optimizer hint.
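As a hedged sketch of the kind of hint meant here (table names are made up), you can use either a query-level hint that rules out nested loops, or a per-join hint; note that specifying a per-join hint also fixes the join order as written:

-- Query-level hint: only hash or merge joins may be used for this query.
SELECT o.OrderId, c.Name
FROM Orders o
INNER JOIN Customers c ON c.CustomerId = o.CustomerId
OPTION (HASH JOIN, MERGE JOIN);

-- Join-level hint: force this particular join to be a hash join.
SELECT o.OrderId, c.Name
FROM Orders o
INNER HASH JOIN Customers c ON c.CustomerId = o.CustomerId;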
I think I understood what he was trying to say/to do:
What I am wondering is if bunching up common table joins at the start
of all my queries actually helps...
Imagine that you have some queries and every query has more than 3 inner joins. The queries are different but always have (for example) 3 tables in common that are joined on the same fields. Now the question is:
what will happen if every query will start with these 3 tables in join, and all the other tables are joined after?
The answer is that it will change nothing, i.e. the optimizer will rearrange the tables in whatever way it thinks will lead to the optimal execution.
Things may change if, for example, you save the result of these 3 joins into a temporary table and then join this saved result with the other tables. But this depends on the filters your queries use. If you have appropriate indexes and your query filters are selective enough (so that your query returns very few rows), there is no need to cache the unfiltered intermediate result, which has too many rows, because the optimizer can choose to filter every table first and only then join them.
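A minimal sketch of that idea, shown here only for the Product1 query and with made-up output column names, just to illustrate the mechanism:

-- Materialise the shared join once into a temporary table...
SELECT B.Id AS BId, C.Id AS CId, D.Id AS DId
INTO #CommonJoined
FROM Product1TableB B
INNER JOIN CommonTable1 C ON C.Id = B.CId
INNER JOIN CommonTable2 D ON D.Id = B.DId;

-- ...then join the other tables to the saved result instead of repeating the joins.
SELECT A1.*, X.CId, X.DId
FROM Product1TableA A1
INNER JOIN #CommonJoined X ON A1.BId = X.BId;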
Gordon's answer is a good explanation, but this answer explains the JOIN's behavior and also specifies that SQL Server's version is relevant:
Although the join order is changed in optimisation, the optimiser doesn't try all possible join orders. It stops when it finds what it considers a workable solution, as the very act of optimisation uses precious resources.
While the optimizer tries its best in choosing a good order for the JOINs, having many JOINs creates a bigger chance of obtaining a not so good plan.
Personally, I have seen many JOINs in some views within an ERP and they usually ran ok. However, from time to time (based on client's data volume, instance configuration etc.), some selects from these views took much more than expected.
If this data reaches an actual application (.NET, JAVA etc.), a way is to cache information from all small tables, store it as dictionaries (hashes) and perform O(1) lookups based on the keys.
This provides the advantages of reducing the JOIN count and not performing reads from the database for these tables (except once when caching data). However, this increases the complexity of the application (cache management).
Another solution is to use temporary tables and populate them in multiple queries to avoid many JOINs per single query. This solution usually performs better and also improves debuggability (if the query does not provide the correct data, or no data at all, which of the 10-15 JOINs is the problem?).
So, my answer to your question is: you might get some benefit from reordering the JOIN clauses, but I recommend avoiding lots of JOINs in the first place.
I'm writing a report that needs to pull data from a view that I'm not authorized to modify. The view is missing a column that I need for a report, so I attempted to join it against one of its source tables. However, this is causing it to take twice as long to execute.
A look at the execution plan shows that it performs two scans of the table and merge joins them together. Is there a hint I can use to convince the query optimizer to visit the table only once?
Abstracted fiddle: http://sqlfiddle.com/#!3/4a44d/1/0
Because the optimizer will never eliminate a table access specified in the query unless nothing is actually referenced from that table.
There is no way to access a table fewer times than it is referenced in the query (as far as I know from 13 years of experience). There may be a few other cases, but the only one I know of where the query optimizer can do fewer accesses than the number of object references is when it can optimize away a left or right outer join: when nothing is accessed from the outer table and it is known from constraints that excluding that work will not change the number of rows or which rows are returned in the result.
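A hedged sketch of that one elimination case (hypothetical tables; assumes Customers.CustomerId is the primary key, so the LEFT JOIN can neither add nor remove Orders rows):

-- No Customers column is referenced in the output, and the join cannot change
-- the row count, so SQL Server can drop the Customers access from the plan.
SELECT o.OrderId, o.Total
FROM Orders o
LEFT JOIN Customers c ON c.CustomerId = o.CustomerId;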
I have a query that has joins between 15 tables or even more. I need to optimize the response time. I created some index columns, changed some conditions from NOT IN to NOT EXISTS, but I found myself wondering about this.
Does the order of these joins affect the response time?
The order of JOINs definitely does affect performance, as well as the type of join. INNER JOINs, generally, will yield quicker results than RIGHT or LEFT OUTER JOINs due to the selectivity of the join.
SQL Server also tries its best to optimize every query, but at 15 joins it may have a hard time. Consider breaking the statement up into smaller chunks with fewer joins. A strategy I've used in the past to resolve things like this is to create a temporary table to store the results, then INSERT into it and UPDATE it through several different statements, with the 15 tables spread out over the appropriate insert/update spots.
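A minimal sketch of that strategy (all names hypothetical): seed a temp table from the central tables, then fill in the remaining columns with separate, smaller statements:

-- Step 1: seed the temp table from the core table(s).
SELECT o.OrderId, o.CustomerId, o.Total,
       CAST(NULL AS varchar(100)) AS CustomerName
INTO #Report
FROM Orders o;

-- Step 2: update it from another table in its own statement.
UPDATE r
SET r.CustomerName = c.Name
FROM #Report r
INNER JOIN Customers c ON c.CustomerId = r.CustomerId;

SELECT OrderId, CustomerId, Total, CustomerName FROM #Report;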
The order of the joins definitely matters. The question is, can you change the order by rewriting the SQL? The query optimizer doesn't necessarily care what order you write the joins in, depending on the type of join. The query optimizer does its best to find the most efficient execution plan based on the SQL you've written. However, it is nowhere close to perfect. If you notice, after looking at the execution plan, that it could be done more efficiently another way, you can trick it into doing it your way and see if that helps.
One way to trick it is to use temp tables to pare down the result set before joining to large tables. This allows fewer records to be selected, which reduces I/O.
Another way, as demonstrated by Adam Machanic, is to use a top in the select clause with an order by.
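Roughly, that technique looks like the sketch below (names hypothetical): putting TOP with an ORDER BY inside a derived table tends to make the optimizer build that intermediate result before joining it further, which can be used to steer the join order:

SELECT f.OrderId, f.Total, c.Name
FROM (
    SELECT TOP (2147483647) o.OrderId, o.CustomerId, o.Total
    FROM Orders o
    WHERE o.OrderDate >= '2023-01-01'
    ORDER BY o.OrderId
) AS f
INNER JOIN Customers c ON c.CustomerId = f.CustomerId;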
Just wondering if anyone has any tricks (or tools) they use to visualize joins. You know, you write the perfect query, hit run, and after it's been running for 20 minutes, you realize you've probably created a cartesian join.
I sometimes have difficulty visualizing what's going to happen when I add another join statement and wondered if folks have different techniques they use when trying to put together lots of joins.
Always keep the end in mind.
Ascertain which are the columns you need
Try to figure out the minimum number of tables which will be needed to do it.
Write your FROM part with the table which will give the max number of columns, e.g. FROM Teams T.
Add each join one by one on a new line. Ensure whether you'll need OUTER, INNER, LEFT, RIGHT JOIN at each step.
Usually works for me. Keep in mind that it is Structured query language. Always break your query into logical lines and it's much easier.
Every join combines two resultsets into one. Each may be from a single database table or a temporary resultset which is the result of previous join(s) or of a subquery.
Always know the order that joins are processed, and, for each join, know the nature of the two temporary result sets that you are joining together. Know what logical entity each row in that resultset represents, and what attributes in that resultset uniquely identify that entity. If your join is intended to always join one row to one row, these key attributes are the ones you need to use (in join conditions) to implement the join. If your join is intended to generate some kind of cartesian product, then it is critical to understand the above to understand how the join conditions (whatever they are) will affect the cardinality of the new joined resultset.
Try to be consistent in the use of outer join directions. I try to always use Left Joins when I need an outer join, as I "think" of each join as "joining" the new table (to the right) to whatever I have already joined together (on the left) of the Left Join statement...
Run an explain plan.
These are always hierarchical trees (to do this, first I must do that). Many tools exist to turn these plans into graphical trees, some built into SQL browsers (e.g. Oracle SQL Developer, or SQL Server Management Studio). If you don't have such a tool, most plan text output includes a "depth" column, which you can use to indent the lines.
What you want to look for is the cost of each row. (Note that for Oracle, though, higher costs can mean less time, if it allows Oracle to do a hash join rather than nested loops, and if the final result set has high cardinality (many, many rows).)
I have never found a better tool than thinking it through and using my own mind.
If the query is so complicated that you cannot do that, you may want to use either CTE's, views, or some other carefully organized subqueries to break it into logical pieces so you can easily understand and visualize each piece even if you cannot manage the whole.
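For instance, a hedged sketch of the CTE approach (names are hypothetical), where each logical piece is named and can be inspected on its own:

WITH RecentOrders AS (
    SELECT OrderId, CustomerId, Total
    FROM Orders
    WHERE OrderDate >= '2023-01-01'
),
BigCustomers AS (
    SELECT CustomerId, Name
    FROM Customers
    WHERE CreditLimit > 10000
)
SELECT bc.Name, ro.OrderId, ro.Total
FROM RecentOrders ro
INNER JOIN BigCustomers bc ON bc.CustomerId = ro.CustomerId;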
Also, if your concern is efficiency, then SQL Server Management Studio 2005 or later lets you get estimated query execution plans without actually executing the query. This can give you a very good idea of where problems lie, if you are using MS SQL Server.
I have a relatively simple query joining two tables. The "Where" criteria can be expressed either in the join criteria or as a where clause. I'm wondering which is more efficient.
Query is to find max sales for a salesman from the beginning of time until they were promoted.
Case 1
select salesman.salesmanid, max(sales.quantity)
from salesman
inner join sales on salesman.salesmanid = sales.salesmanid
and sales.salesdate < salesman.promotiondate
group by salesman.salesmanid
Case 2
select salesman.salesmanid, max(sales.quantity)
from salesman
inner join sales on salesman.salesmanid = sales.salesmanid
where sales.salesdate < salesman.promotiondate
group by salesman.salesmanid
Note that Case 1 lacks a WHERE clause altogether.
RDBMS is Sql Server 2005
EDIT
If the second piece of the join criteria (or the WHERE clause) were sales.salesdate < some fixed date, so that it isn't actually a criterion for joining the two tables, does that change the answer?
I wouldn't use performance as the deciding factor here - and quite honestly, I don't think there's any measurable performance difference between those two cases, really.
I would always use case #2 - why? Because in my opinion, you should only put the actual criteria that establish the JOIN between the two tables into the JOIN clause - everything else belongs in the WHERE clause.
Just a matter of keeping things clean and put things where they belong, IMO.
Obviously, there are cases with LEFT OUTER JOINs where the placement of the criteria does make a difference in terms of what results get returned - those cases would be excluded from my recommendation, of course.
Marc
You can run the execution plan estimator and SQL Profiler to see how they stack up against each other.
However, they are semantically the same underneath the hood according to this SQL Server MVP:
http://www.eggheadcafe.com/conversation.aspx?messageid=29145383&threadid=29145379
I prefer to have any hard coded criteria in the join. It makes the SQL much more readable and portable.
Readability:
You can see exactly what data you're going to get because all the table criteria is written right there in the join. In large statements, the criteria may be buried within 50 other expressions and is easily missed.
Portability:
You can just copy a chunk out of the FROM clause and paste it somewhere else. That gives the joins and any criteria you need to go with it. If you always use that criteria when joining those two tables, then putting it in the join is the most logical.
For Example:
FROM
table1 t1
JOIN table2 t2_ABC ON
t1.c1 = t2_ABC.c1 AND
t2_ABC.c2 = 'ABC'
If you need to get a second column out of table 2, you just copy that block into Notepad, search/replace "ABC", and presto: an entire new block of code ready to paste back in.
Additional:
It's also easier to change between an inner and outer join without having to worry about any criteria that may be floating around in the WHERE clause.
I reserve the WHERE clause strictly for run-time criteria where possible.
As for efficiency:
If you're referring to execution speed, then as everyone else has stated, it's redundant.
If you're referring to easier debugging and reuse, then I prefer option 1.
One final thing I want to add, which I noticed before:
Both ways may give the same performance, or putting the criteria in the WHERE clause may be a little faster, as found in some answers.
But I identified one difference, which you can use for your logical needs:
With an outer join, criteria in the ON clause do not filter out rows; unmatched rows are still returned, just with NULLs in the joined columns.
Criteria in the WHERE clause, on the other hand, filter rows out of the entire result.
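A small sketch of that difference, reusing the salesman/sales tables from the question but with a LEFT JOIN (with an INNER JOIN the two forms return the same rows):

-- Criteria in the ON clause: every salesman row is kept; the sales columns
-- come back NULL for salesmen with no qualifying sale.
select salesman.salesmanid, sales.quantity
from salesman
left join sales on salesman.salesmanid = sales.salesmanid
and sales.salesdate < salesman.promotiondate

-- Criteria in the WHERE clause: the NULL rows are then filtered out,
-- so the LEFT JOIN effectively behaves like an INNER JOIN.
select salesman.salesmanid, sales.quantity
from salesman
left join sales on salesman.salesmanid = sales.salesmanid
where sales.salesdate < salesman.promotiondate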
I don't think you'll find a finite answer for this one that applies to all cases. The 2 are not always interchangeable - since for some queries (some left joins) you will come up with different results by placing the criteria in the WHERE vs the FROM line.
In your case, you should evaluate both of these queries. In SSMS, you can view the estimated and actual execution plans of both of these queries - that would be a good first step in determining which is more optimal. You could also view the time & IO for each (set statistics time on, set statistics io on) - and that will also give you information to make your decision.
In the case of the queries in your question - I'd bet that they'll both come out with the same query plan - so in this case it may not matter, but in others it could potentially produce different plans.
Try this to see the difference between the 2...
SET STATISTICS IO ON
SET STATISTICS TIME ON
select salesman.salesmanid,
max(sales.quantity)
from salesman inner join sales on salesman.salesmanid = sales.salesmanid
and sales.salesdate < salesman.promotiondate
group by salesman.salesmanid
select salesman.salesmanid,
max(sales.quantity)
from salesman inner join sales on salesman.salesmanid = sales.salesmanid
where sales.salesdate < salesman.promotiondate
group by salesman.salesmanid
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
It may seem flippant, but the answer is: whichever query the query analyzer produces the most efficient plan for.
To my mind, they seem to be equivalent, so the query analyzer may well produce identical plans, but you'd have to test.
Neither is more efficient; using the WHERE method is considered the old way to do it (http://msdn.microsoft.com/en-us/library/ms190014.aspx). You can look at the execution plan and see that they do the same thing.
Become familiar with the Estimated Execution Plan in SQL Management Studio!! Like others have said, you're at the mercy of the analyzer no matter what you do so trust its estimates. I would guess the two you provided would produce the exact same plan.
If it's an attempt to change a development culture, pick the one that gives you a better plan; for the ones that are identical, follow the culture.
I've commented this on other "efficiency" posts like this one (it's both sincere and sarcastic) -- if this is where your bottlenecks reside, then high-five to you and your team.
Case 1 (criteria in the JOIN) is better for encapsulation, and increased encapsulation is usually a good thing: decreased copy/paste omissions to another query, decreased bugs if later converted to LEFT JOIN, and increased readability (related stuff together and less "noise" in WHERE clause). In this case, the WHERE clause only captures principal table criteria or criteria that spans multiple tables.