The following code will work to select data from two tables:
SELECT t1.foo, t2.bar FROM TABLE1 t1 INNER JOIN TABLE2 t2 ON t1.foo=t2.foo
I could just as easily have written
SELECT t2.foo, t2.bar FROM TABLE1 t1 INNER JOIN TABLE2 t2 ON t1.foo=t2.foo
t1.foo or t2.foo: six of one or half a dozen of the other. Why not just foo?
I've been wondering why the SQL server doesn't just automatically return the data without me specifying one table or the other, since the choice is entirely arbitrary (as far as I can tell).
I can make up a scenario where you would need to specify the table, such as
SELECT t1.foo, t2.bar FROM TABLE1 t1 INNER JOIN TABLE2 t2 ON t1.foo + 1 = t2.foo
However, such scenarios are far from the norm in my experience.
Can anyone enlighten me as to why the language is designed so that I have to make this seemingly arbitrary decision in my code?
Because equality in MS SQL doesn't necessarily mean the values are equal in the way you want. Consider the following two values for foo: "Bar" and "baR". SQL Server will consider them equal for the purposes of the join because of the case insensitivity inherent in the comparison, but which one were you asking for? SQL Server doesn't know, and it can't guess. You must tell it explicitly.
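For instance (a minimal sketch with hypothetical tables, assuming a case-insensitive collation):
-- Suppose TABLE1.foo holds 'Bar' and TABLE2.foo holds 'baR'
SELECT t1.foo, t2.foo
FROM TABLE1 t1
INNER JOIN TABLE2 t2 ON t1.foo = t2.foo
-- The join matches because 'Bar' = 'baR' under case insensitivity,
-- but t1.foo returns 'Bar' while t2.foo returns 'baR'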
Edit: As @Lukas Eder brought up, not all implementations of SQL use case-insensitive comparisons. I know MS SQL uses case insensitivity, and my answer is written with that in mind.
Your reasoning is not quite true. While t1.foo = t2.foo may hold true, that doesn't mean they're the same. Some examples:
One could be VARCHAR(1) the other VARCHAR(2)
One could be VARCHAR(1) the other NUMBER(1)
t1 could be a simple table, whereas t2 is a view (or nested select) that performs hyper-complex calculations for the value of foo. The projection cost of each foo might not be the same in some RDBMS.
And there are dozens of other reasons why it would be ambiguous to just write foo.
If you're sure that the columns represent the same thing you could join with a USING clause.
SELECT foo, t2.bar FROM TABLE1 t1 INNER JOIN TABLE2 t2 USING (foo);
Otherwise, there's no guarantee that t1.foo is the same thing as t2.foo.
In this case you have an INNER JOIN, so it's clear that the decision is arbitrary. But there are many situations where, even if you join on foo, the two columns are not the same.
For example, in the case of a LEFT JOIN,
or in case you have something like ON t1.foo = t2.foo + <whatever>.
The engine needs your input to know where to take the data from.
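For instance (a sketch reusing the hypothetical TABLE1/TABLE2 from the question):
SELECT t1.foo, t2.foo
FROM TABLE1 t1
LEFT JOIN TABLE2 t2 ON t1.foo = t2.foo
-- For t1 rows with no match in t2, t1.foo holds a value while t2.foo is NULL,
-- so the two columns are no longer interchangeable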
The reason you need to make this decision is that it isn't arbitrary to the system. It does not know which table has the data you want, so you need to specify it. When the system builds the execution plan, it does not work out which columns contain the same data in both tables; as far as it is concerned, these two columns could hold different data. It isn't going to infer that, just because you said the columns are equal, it can display either one when no table is specified.
In that particular case, t1.foo and t2.foo are the same thing, but the engine isn't optimized for that (and it would be confusing if it were). What if your join did something where they may not be the same, like this?
SELECT t2.foo, t2.bar FROM TABLE1 t1 INNER JOIN TABLE2 t2 ON t1.foo<t2.foo
Since we are using <, foo on t1 and t2 could be very different things. The engine can't "guess" in this case.
Even if such scenarios are far from the norm in your experience, the engine has to allow for them; otherwise some types of queries would be extremely difficult to write.
SQL doesn't do it because it simply doesn't resolve ambiguities. (But, as you note, the two are equivalent here.)
For the application's lifecycle it's ultimately better to resolve them yourself: if a column changes name or the join type changes, your code is less likely to break, and it's more obvious what your intentions were. But those benefits weren't intentional, I'm sure.
Related
Problem
I need to better understand the rules about when I can reference an outer table in a subquery and when (and why) that is an inappropriate request. I've discovered a duplication in an Oracle SQL query I'm trying to refactor, but I'm running into issues when I try to turn my referenced table into a grouped subquery.
The following statement works appropriately:
SELECT t1.*
FROM table1 t1
INNER JOIN table2 t2
on t1.id = t2.id
and t2.date = (SELECT max(date)
FROM table2
WHERE id = t1.id) --This subquery has access to t1
Unfortunately table2 sometimes has duplicate records, so I need to aggregate t2 before I join it to t1. However, when I try to wrap it in a subquery to accomplish this, the SQL engine suddenly can't recognize the outer table any longer.
SELECT t1.*
FROM table1 t1
INNER JOIN (SELECT *
FROM table2 t2
WHERE t1.id = t2.id --This loses access to t1
and t2.date = (SELECT max(date)
FROM table2
WHERE id = t1.id)) sub on t1.id = sub.id
--Subquery loses access to t1
I know these are fundamentally different queries I'm asking the compiler to put together but I'm not seeing why the one would work but not the other.
I know I can duplicate the table references in my subquery and effectively detach it from the outer table, but that seems like a really ugly way of accomplishing this task (what with all the duplication of code and processing). A detached version might look something like the sketch below.
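A sketch of that detached version, using the example tables above with table2 referenced twice:
SELECT t1.*
FROM table1 t1
INNER JOIN (SELECT t2a.id, t2a.date
            FROM table2 t2a
            WHERE t2a.date = (SELECT max(date)
                              FROM table2 t2b
                              WHERE t2b.id = t2a.id)) sub
ON t1.id = sub.id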
Helpful References
I found this fantastic description of the order in which clauses are evaluated in SQL Server: (INNER JOIN ON vs WHERE clause). I'm using Oracle, but I would think this is standard across the board. There is a clear order of clause evaluation (with FROM being first), so I would think that any clause occurring further down the list would have access to all information previously processed. I can only assume my second query somehow changes that ordering so that my subquery is being evaluated too early?
In addition, I found a similar question (Referencing outer query's tables in a subquery), but while the input was good, they never really explained why the asker couldn't do what he was doing and just gave alternative solutions to his problem. I've tried those alternate solutions, but they cause me other issues; namely, the subquery with the date reference is fundamental to the entire operation, so I can't get rid of it.
Questions
I want to understand what I've done here... Why can my initial subquery see the outer table but not after I wrap the entire statement in a subquery?
That said, if what I'm trying to do can't be done, what is the best way of refactoring the first query to eliminate the duplication? Should I reference table1 twice (with all the duplication that requires)? Or is there (probably) a better way of tackling this problem?
Thanks in advance!
------EDIT------
As some have surmised, the queries above are not the actual query I'm refactoring but an example of the problem I'm running into. The query I'm working with is a lot more complicated, so I'm hesitant to post it here for fear of getting people off track.
------UPDATE------
So I ran this by a fellow developer, and he had one possible explanation for why my subquery loses access to t1: because I'm wrapping the subquery in parentheses, he thinks it is being evaluated before my table t1 is. This would definitely explain the 'ORA-00904: "t1"."id": invalid identifier' error I've been receiving. It would also suggest that, much like the arithmetic order of operations, adding parentheses gives that part of the statement priority within certain clause evaluations. I would still love for an expert to weigh in on whether they agree that this is a logical explanation for what I'm seeing here.
So I figured this out based on the comment that Martin Smith made above (THANKS MARTIN!) and I wanted to make sure I shared my discovery for anyone else who trips across this issue.
Technical Considerations
Firstly, it would certainly help if I used the proper terminology to describe my problem: My first statement above uses a correlated subquery:
http://en.wikipedia.org/wiki/Correlated_subquery
http://www.programmerinterview.com/index.php/database-sql/correlated-vs-uncorrelated-subquery/
This is actually a fairly inefficient way of pulling back data, as it reruns the subquery for every row in the outer table. For this reason I'm going to look for ways of eliminating this type of subquery in my code:
https://blogs.oracle.com/optimizer/entry/optimizer_transformations_subquery_unesting_part_1
My second statement, on the other hand, was using what is called an inline view in Oracle, also known as a derived table in SQL Server:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/queries007.htm
http://www.programmerinterview.com/index.php/database-sql/derived-table-vs-subquery/
An inline view / derived table creates a temporary unnamed view at the beginning of your query and then treats it like another table until the operation is complete. Because the compiler needs to create that temporary view when it sees one of these subqueries on the FROM line, such subqueries must be entirely self-contained, with no references outside the subquery.
Why what I was doing was stupid
What I was trying to do in that second query was essentially create a view based on an ambiguous reference to another table that was outside the knowledge of my statement. It would be like trying to reference a field of a table that you hadn't stated anywhere in your query.
Workaround
Lastly, it's worth noting that Martin suggested a fairly clever, but ultimately inefficient, way to accomplish what I was trying to do. The APPLY operator is proprietary SQL Server syntax, but it allows you to talk to objects outside of your derived table:
http://technet.microsoft.com/en-us/library/ms175156(v=SQL.105).aspx
Likewise this functionality is available in Oracle through different syntax:
What is the equivalent of SQL Server APPLY in Oracle?
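For example, in Oracle 12c and later the same idea can be written with CROSS APPLY, which lets the inline view reference t1 (a sketch using the example tables above; earlier versions need the workarounds from the link):
SELECT t1.*
FROM table1 t1
CROSS APPLY (SELECT t2.*
             FROM table2 t2
             WHERE t2.id = t1.id
             AND t2.date = (SELECT max(date)
                            FROM table2
                            WHERE id = t1.id)) sub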
Ultimately I'm going to re-evaluate my entire approach to this query, which means I'll have to rebuild it from scratch (believe it or not, I didn't create this monstrosity originally - I swear!). A big thanks to everyone who commented - this was definitely stumping me, but all of the input helped put me on the right track!
How about the following query:
SELECT t1.* FROM
(
SELECT *
FROM
(
SELECT t2.id,
RANK() OVER (PARTITION BY t2.id ORDER BY t2.date DESC) AS R
-- (ROW_NUMBER() would de-duplicate ties on date, if needed)
FROM table2 t2
)
WHERE R = 1
) sub
INNER JOIN table1 t1
ON t1.id = sub.id
In your second example you are trying to pass the t1 reference down two levels... you can't do that; you can only pass it down one level (which is why the first works). If you give a better example of what you are trying to do, we can help you rewrite your query as well.
I have an insert-select statement that needs to only insert rows where a particular identifier of the row does not exist in either of two other tables. Which of the following would be faster?
INSERT INTO Table1 (...)
SELECT (...) FROM Table2 t2
WHERE ...
AND NOT EXISTS (SELECT 'Y' from Table3 t3 where t2.SomeFK = t3.RefToSameFK)
AND NOT EXISTS (SELECT 'Y' from Table4 t4 where t2.SomeFK = t4.RefToSameFK AND ...)
... or...
INSERT INTO Table1 (...)
SELECT (...) FROM Table2 t2
WHERE ...
AND t2.SomeFK NOT IN (SELECT RefToSameFK from Table3)
AND t2.SomeFK NOT IN (SELECT RefToSameFK from Table4 WHERE ...)
... or do they perform about the same? Additionally, is there any other way to structure this query that would be preferable? I generally dislike subqueries, as they add another "dimension" to a query that can increase runtime considerably.
Usually it does not matter whether NOT IN is slower or faster than NOT EXISTS, because they are NOT equivalent in the presence of NULLs. Read:
NOT IN vs NOT EXISTS
In these cases you almost always want NOT EXISTS, because it has the behaviour you usually expect.
If they are equivalent, your database has likely already figured that out and will generate the same execution plan for both.
In the few cases where the two options are equivalent and your database is not able to figure that out, it is better to analyze both execution plans and choose the better option for your specific case.
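To illustrate the NULL trap (a minimal sketch; assume Table3.RefToSameFK is nullable and contains at least one NULL):
SELECT t2.SomeFK FROM Table2 t2
WHERE t2.SomeFK NOT IN (SELECT RefToSameFK FROM Table3)
-- returns no rows at all: SomeFK NOT IN (..., NULL) is never true
SELECT t2.SomeFK FROM Table2 t2
WHERE NOT EXISTS (SELECT 'Y' FROM Table3 t3 WHERE t3.RefToSameFK = t2.SomeFK)
-- returns the rows you probably expected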
You could use a LEFT OUTER JOIN and check if the value in the RIGHT table is NULL. If the value is NULL, the row doesn't exist. That is one way to avoid subqueries.
SELECT (...) FROM Table2 t2
LEFT OUTER JOIN Table3 t3 ON (t2.SomeFK = t3.RefToSameFK)
WHERE t3.RefToSameFK IS NULL
It's dependent on the size of the tables, the available indices, and the cardinality of those indices.
If you don't get the same execution plan for both queries, and if neither plan performs a JOIN instead of a subquery, then I would guess that version two is faster: version one is correlated and therefore executes many more subqueries, while version two can be satisfied with three queries in total.
(Also, note that different engines may be biased in one direction or another. Some engines may correctly determine that the queries are the same (if they really are the same) and resolve to the same execution plan.)
For bigger tables it's recommended to use NOT EXISTS/EXISTS, because the IN clause may run the subquery many times, depending on the architecture of the tables.
Based on the cost optimizer: there is no difference.
I am reading about the NATURAL shorthand form for SQL joins and I see some traps:
it automatically takes all same-named column pairs (use USING to specify an explicit column list)
if a new column is added, the join output can change "unexpectedly" too, which may not be obvious (even if you know how NATURAL works) in complicated structures; see the sketch after this list
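A sketch of that second trap (hypothetical tables):
-- employees(dept_id, emp_name) and departments(dept_id, dept_head)
SELECT * FROM employees NATURAL JOIN departments
-- joins on dept_id, as intended; but add a "notes" column to both tables
-- later, and the same query silently joins on dept_id AND notes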
NATURAL JOIN syntax is an anti-pattern:
The purpose of the query is less obvious;
the columns used by the application are not clear;
the columns used can change "unexpectedly".
The syntax goes against the modularity rule of using strict typing whenever possible. Explicit is almost universally better.
Because of this, I don't recommend the syntax in any environment.
I also don't recommend mixing syntax (i.e. using both NATURAL JOIN and explicit INNER/OUTER JOIN syntax) - keep a consistent codebase format.
These "traps", which seem to argue against natural joins, cut both ways. Suppose you add a new column to table A, fully expecting it to be used in joining with table B. If you know that every join of A and B is a natural join, then you're done. If every join explicitly uses USING, then you have to track them all down and change them. Miss one and there's a bug.
Use NATURAL joins when the semantics of the tables suggests that this is the right thing to do. Use explicit join criteria when you want to make sure the join is done in a specific way, regardless of how the table definitions might evolve.
One thing that completely destroys NATURAL for me is that most of my tables have an id column, which are obviously semantically all different. You could argue that having a user_id makes more sense than id, but then you end up writing things like user.user_id, a violation of DRY. Also, by the same logic, you would also have columns like user_first_name, user_last_name, user_age... (which also kind of makes sense in view that it would be different from, for example, session_age)... The horror.
I'll stick to my JOIN ... ON ..., thankyouverymuch. :)
I agree with the other posters that an explicit join should be used for reasons of clarity and also to easily allow a switch to an "OUTER" join should your requirements change.
However, most of your "traps" have nothing to do with joins, but rather with the evils of using SELECT * instead of explicitly naming the columns you require (SELECT a.col1, a.col2, b.col1, b.col2). These traps occur whenever a wildcard column list is used.
Adding an extra reason not listed in any of the answers above. In Postgres (I'm not sure whether this is the case for other databases), if no column names are found in common between the two tables, a NATURAL JOIN is performed as a CROSS JOIN. This means that if you had an existing query and then changed one of the column names in a table, the query would still return a set of rows rather than an error. If instead you used the JOIN ... USING (...) syntax, you would get an error if the joining column were no longer there.
The postgres documentation has a note to this effect:
Note: USING is reasonably safe from column changes in the joined relations since only the listed columns are combined. NATURAL is considerably more risky since any schema changes to either relation that cause a new matching column name to be present will cause the join to combine that new column as well.
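A sketch of that failure mode (hypothetical tables in Postgres):
-- orders(customer_id, total) and customers(customer_id, name) share customer_id
SELECT * FROM orders NATURAL JOIN customers
-- joins on customer_id; but rename customers.customer_id to id and the tables
-- share no column names, so the same query silently becomes a cross join,
-- whereas JOIN customers USING (customer_id) would raise an error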
Do you mean syntax like this:
SELECT *
FROM t1, t2, t3
WHERE t1.id = t2.id
AND t2.id = t3.id
Versus this:
SELECT *
FROM t1
LEFT OUTER JOIN t2 ON t1.id = t2.id
LEFT OUTER JOIN t3 ON t2.id = t3.id
I prefer the 2nd syntax and also format it differently:
SELECT *
FROM T1
LEFT OUTER JOIN T2 ON T2.id = T1.id
LEFT OUTER JOIN T3 ON T3.id = T2.id
In this case, it is very clear which tables I am joining and which ON clause I am using to join them. With the first syntax it is just too easy to leave out a proper JOIN condition and get a huge result set. I do this because I am prone to typos, and this is my insurance against that. Plus, it is visually easier to debug.
Provided that the tables could essentially be inner joined, since the WHERE clause excludes all records that don't match, just how bad is it to use the first of the following two query syntax styles:
SELECT {COLUMN LIST}
FROM TABLE1 t1, TABLE2 t2, TABLE3 t3, TABLE4 t4 (etc)
WHERE t1.uid = t2.foreignid
AND t2.uid = t3.foreignid
AND t3.uid = t4.foreignid
etc
instead of
SELECT {COLUMN LIST}
FROM TABLE1 t1
INNER JOIN TABLE2 t2 ON t1.uid = t2.foreignid
INNER JOIN TABLE3 t3 ON t2.uid = t3.foreignid
INNER JOIN TABLE4 t4 ON t3.uid = t4.foreignid
I'm not sure if this is limited to Microsoft SQL Server, or even to a particular version, but my understanding is that the first scenario does a full outer join to make all possible correlations accessible.
I've used the first approach in the past to optimise queries that access two significantly large stores of data, each with peripheral tables joined to it, with the product of those joins coming together late in the query. By letting each of the larger tables join to its respective lookup tables, and only combining a specific subset of each of the larger tables, I found notable speed improvements over introducing the large tables to each other prior to the specific filtering.
Under normal (simple join) circumstances, would it not be far better to use the second scenario? I find it more easily readable, and it seems like it would be much faster.
INNER JOIN ON vs WHERE clause
Maybe the best way to answer this is to take a look at how the database handles the query internally. If you're on SQL Server, use Profiler to see how many reads etc. each query takes and the query plan to see what route is being taken through the data. Statistics, skewing etc. will also most likely play a role.
The first query doesn't produce a full OUTER join (which is the union of both LEFT and RIGHT joins). Essentially, unless there are some internal parser-specific optimizations, both queries are equal.
Personally, I would never use the first syntax. It may be the same performance-wise, but it is harder to maintain and far more subject to accidental cross joins when things get complex. If you miss an ON condition, the query fails the syntax check; if you miss one of the WHERE conditions that is the equivalent of an ON condition, it happily does a cross join. It is also a syntax that is 17 years out of date, for goodness' sake!
Further, the old-style left and right outer join syntax is broken in SQL Server and does NOT always return the correct results (it can sometimes interpret the query as a cross join instead of an outer join). It has been deprecated and will not be usable at all in the next version. If you need to change one of the queries to use an outer join, you could be looking at a major rewrite, as it is especially bad to try to mix the two kinds of syntax.
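For reference, the old-style outer join this refers to looked like the following (deprecated SQL Server syntax, shown only to illustrate the problem):
SELECT t1.foo, t2.bar
FROM TABLE1 t1, TABLE2 t2
WHERE t1.foo *= t2.foo
-- *= was the old-style LEFT OUTER JOIN; with additional WHERE predicates
-- the engine cannot tell join conditions from filters, hence the wrong results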
Given these two queries:
Select t1.id, t2.companyName
from table1 t1
INNER JOIN table2 t2 on t2.id = t1.fkId
WHERE t2.aField <> 'C'
OR:
Select t1.id, t2.companyName
from table1 t1
INNER JOIN table2 t2 on t2.id = t1.fkId and t2.aField <> 'C'
Is there a demonstrable difference between the two? It seems to me that the clause t2.aField <> 'C' will run against every row in t2 that meets the join criteria regardless. Am I incorrect?
Update: I did an "Include Actual Execution Plan" in SQL Server. The two queries were identical.
I prefer to use the join criteria to explain how the tables are joined together, so I would place the additional clause in the WHERE section.
I would hope (although I have no stats) that SQL Server is clever enough to find the optimal query plan regardless of which syntax you use.
HOWEVER, if you have indexes that include both id and aField, I would suggest placing both conditions in the inner join criteria.
It would be interesting to see the query plans in these two (or three) scenarios and see what happens. Nice question.
There is a difference. You should do an EXPLAIN PLAN for both of the selects and see it in detail.
As for a simpler explanation:
The WHERE clause gets executed only after the two tables have been joined, so it runs against each row returned from the join, and not necessarily against every row of table2.
Performance-wise, it's best to eliminate unwanted results early on, so there are fewer rows for joins, WHERE clauses, and other operations to deal with later.
In the second example there are two conditions that both have to hold for rows to be joined together, so depending on the join type it can give different results than the first (with the INNER JOIN here, the plans come out identical, as your update shows).
It depends.
SELECT
t1.foo,
t2.bar
FROM
table1 t1
LEFT JOIN table2 t2 ON t1.SomeId = t2.SomeId
WHERE
t2.SomeValue IS NULL
is different from
SELECT
t1.foo,
t2.bar
FROM
table1 t1
LEFT JOIN table2 t2 ON t1.SomeId = t2.SomeId AND t2.SomeValue IS NULL
It is different because the former keeps only the t1 rows that either have no match in t2 or whose matching t2 row has NULL in t2.SomeValue; any t1 row matched to a t2 row with a non-NULL SomeValue is filtered out. The latter keeps every row from t1, populating t2's columns only where a matching row with a NULL SomeValue exists.
Just use the ON clause for the join condition and the WHERE clause for the filter.
Unless moving the join condition to the WHERE clause changes the meaning of the query (as in the LEFT JOIN example above), it doesn't matter where you put it. SQL will rearrange the conditions, and as long as they are provably equivalent, you'll get the same execution plan.
That being said, I think it's more of a logical / readability thing. I usually put anything that relates two tables in the join, and anything that filters in the where.
I'd prefer the first query. SQL Server will use the best join type for your query based on the indexes you have, and after that it will apply the WHERE clause. But you can run both queries, look at the execution plans, compare them, and choose the faster one (optimizing by adding indexes as well).
Unless you are working on a single-user app, or something similarly small that creates trivial load, the only consideration that means anything is how the server will process your query.
The answers that mention query plans give good advice.
In addition, SET STATISTICS IO ON gives you an idea of how many reads your query will generate (I especially love Azder's post).
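In SQL Server that looks like this (a minimal sketch):
SET STATISTICS IO ON;
-- run the query under test here
SET STATISTICS IO OFF;
-- the Messages output then reports logical and physical reads per table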
Think of every DB server as a pump of data from disk to client. That pump goes faster if it performs only the I/O needed to get the job done; if the data is in cache it will be even faster. But you don't want to read more than you need from disk - that would crowd useful data out of your cache for no good reason.