We are using SAP HANA 1.0 SPS12. As INTERSECT, MINUS, and EXCEPT nodes are not available in graphical mode, we need to rely on LEFT OUTER JOIN or use the method below:
https://blogs.sap.com/2014/03/02/thinking-in-hana-part-1-set-operators/
The tables involved have volumes of around 1 billion rows.
Can anyone suggest which method is better from a performance point of view: using LEFT OUTER JOIN, realising INTERSECT through UNION, or using a scripted view with the INTERSECT operator?
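To make the options concrete, here is a minimal sketch (hypothetical single-column tables tab_a and tab_b; the real keys may be composite):

-- Native set operator (SPS12: scripted view or table function only):
SELECT id FROM tab_a
INTERSECT
SELECT id FROM tab_b;

-- INTERSECT emulated with an inner join (possible in graphical mode):
SELECT DISTINCT a.id
FROM tab_a a
INNER JOIN tab_b b ON b.id = a.id;

-- MINUS emulated with a LEFT OUTER JOIN (rows in tab_a but not in tab_b):
SELECT DISTINCT a.id
FROM tab_a a
LEFT OUTER JOIN tab_b b ON b.id = a.id
WHERE b.id IS NULL;

-- INTERSECT realised through UNION, as in the linked blog post:
SELECT id
FROM ( SELECT DISTINCT id FROM tab_a
       UNION ALL
       SELECT DISTINCT id FROM tab_b ) u
GROUP BY id
HAVING COUNT(*) = 2;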
With HANA 1 SPS 12 it’s perfectly OK to use table functions as part of graphical models.
Trying to emulate complex query operations usually worsens both performance and maintainability.
If your project "doesn't allow" using table functions, the answer must be: fix that rule instead of producing twisted view logic.
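To show how little is involved, here is a minimal SQLScript table function sketch (function, table, and column names are hypothetical) that a graphical calculation view can consume like any other data source:

CREATE FUNCTION tf_common_ids ()
RETURNS TABLE (id INTEGER)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA AS
BEGIN
    -- the set operator that the graphical editor lacks:
    RETURN SELECT id FROM tab_a
           INTERSECT
           SELECT id FROM tab_b;
END;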
As for the "outer join is faster" claim: that's not true per se. Inner joins have stricter semantics in that they have to be executed in every case, whereas outer joins allow the engine to avoid the actual computation of the join in cases where the result set won't be impacted by it. That means that, when the conditions are fulfilled, the outer join can simply be skipped, which is of course faster than executing a join.
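To illustrate the pruning point, assume a hypothetical calculation view CV_ORDERS that left-outer-joins ORDERS to CUSTOMERS with cardinality n..1:

-- Only left-side columns are requested, so the optimizer can prune
-- (skip) the outer join entirely:
SELECT order_id, amount FROM CV_ORDERS;

-- A right-side column is requested, so the join has to be executed:
SELECT order_id, customer_name FROM CV_ORDERS;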
I have multiple SQL queries that I run one after the other to get a set of data. In each query, there are a bunch of tables joined that are exactly the same with the other queries. For example:
Query1
SELECT * FROM
Product1TableA A1
INNER JOIN Product1TableB B on A1.BId = B.Id
INNER JOIN CommonTable1 C on C.Id = B.CId
INNER JOIN CommonTable2 D on D.Id = B.DId
...
Query2
SELECT * FROM Product2TableA A2
INNER JOIN Product2TableB B on A2.BId = B.Id
INNER JOIN CommonTable1 C on C.Id = B.CId
INNER JOIN CommonTable2 D on D.Id = B.DId
...
I am playing around with re-ordering the joins (around two dozen tables are joined per query) and I read here that join order should not really affect query execution unless SQL "gives up" during optimization because of how big the query is...
What I am wondering is if bunching up common table joins at the start of all my queries actually helps...
In theory, the order of the joins in the FROM clause doesn't make a difference to query performance. For a small number of tables, there should be no difference. The optimizer should find the best execution path.
For a larger number of tables, the optimizer may have to short-circuit its search regarding join order. It would then be using heuristics -- and these could be affected by join order.
Earlier queries would have no effect on a particular execution plan.
If you are having problems with performance, I am guessing that join order is not the root cause. The most common problem that I have in SQL Server is inappropriate nested-loop joins -- and these can be handled with an optimizer hint.
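For example, on SQL Server (Orders and Customers are placeholder tables):

-- Force hash joins for every join in the query:
SELECT o.OrderID, c.CompanyName
FROM Orders o
INNER JOIN Customers c ON c.CustomerID = o.CustomerID
OPTION (HASH JOIN);

-- Or hint just this one join:
SELECT o.OrderID, c.CompanyName
FROM Orders o
INNER HASH JOIN Customers c ON c.CustomerID = o.CustomerID;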
I think I understood what he was trying to say/do:
What I am wondering is if bunching up common table joins at the start of all my queries actually helps...
Imagine that you have some queries and every query has more than 3 inner joins. The queries are different but always have (for example) 3 tables in common that are joined on the same fields. Now the question is:
what will happen if every query starts with these 3 tables joined first, and all the other tables are joined after?
The answer is that it will change nothing, i.e. the optimizer will rearrange the tables in whatever way it thinks will lead to the optimal execution.
The thing may change if, for example, you save the result of these 3 joins into a temporary table and then use this saved result to join with the other tables. But this depends on the filters that your queries use. If you have appropriate indexes and your query filters are selective enough (so that your query returns very few rows), there is no need to cache the unfiltered intermediate result with its many rows, because the optimizer can choose to filter every table first and only then join them.
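A minimal T-SQL sketch of that idea (table and column names are hypothetical):

-- Materialize the result of the shared joins once:
SELECT B.Id AS BId, C.Name AS CName, D.Name AS DName
INTO #Common
FROM SharedTableB B
INNER JOIN CommonTable1 C ON C.Id = B.CId
INNER JOIN CommonTable2 D ON D.Id = B.DId;

-- Reuse the saved result in each subsequent query:
SELECT *
FROM Product1TableA A1
INNER JOIN #Common x ON x.BId = A1.BId;

DROP TABLE #Common;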
Gordon's answer is a good explanation, but this answer explains the JOIN's behavior and also specifies that SQL Server's version is relevant:
Although the join order is changed in optimisation, the optimiser doesn't try all possible join orders. It stops when it finds what it considers a workable solution, as the very act of optimisation uses precious resources.
While the optimizer tries its best when choosing a good order for the JOINs, having many JOINs creates a bigger chance of obtaining a not-so-good plan.
Personally, I have seen many JOINs in some views within an ERP and they usually ran OK. However, from time to time (depending on the client's data volume, instance configuration, etc.), some selects from these views took much longer than expected.
If this data reaches an actual application (.NET, Java, etc.), one way is to cache the information from all the small tables, store it as dictionaries (hashes), and perform O(1) lookups based on the keys.
This provides the advantages of reducing the JOIN count and not performing reads from the database for these tables (except once when caching data). However, this increases the complexity of the application (cache management).
Another solution is to use temporary tables and populate them in multiple queries to avoid many JOINs per single query. This solution usually performs better and also increases debuggability (if the query returns incorrect data, or no data at all, which of the 10-15 JOINs is the problem?).
So, my answer to your question is: you might get some benefit from reordering the JOIN clauses, but I recommend avoiding lots of JOINs in the first place.
I'm hoping some of the more experienced database/dwh developers or DBAs can weigh in on this one:
My team is using OBIEE as a front-end tool to drive ad-hoc reporting being done by our business units.
There is a lot of latency when generating result sets that are relatively small. We are facing ~1 hour to produce ~50k records.
I looked into one of the queries that is behaving this way, and I was surprised to find that all of the tables being referenced are being cross-joined, and then filters are being applied in the WHERE clause.
So, to illustrate, the queries tend to look like this:
SELECT ...
FROM tbl1
,tbl2
,tbl3
,tbl4
WHERE tbl1.col1 = tbl2.col1
and tbl3.col2 = tbl2.col2
and tbl4.col3 = tbl3.col3
instead of like this:
SELECT ...
FROM tbl1
INNER JOIN tbl2
ON tbl1.col1 = tbl2.col1
INNER JOIN tbl3
ON tbl3.col2 = tbl2.col2
INNER JOIN tbl4
ON tbl4.col3 = tbl3.col3
Now, from what I know about the order of query operations, the FROM clause gets performed before the WHERE clause, so the first example would perform much more slowly than the latter example. Am I correct (please answer only if you know the answer in the context of Oracle DB)? Unfortunately, I don't have the admin rights to run a trace against the 2 different versions of the query.
Is there a reason to set up the query the first way, related to how the OBIEE interface works? Remember, the query is the result of a user drag-and-dropping attributes into a sandbox, from a 'bank' of attributes. Selecting any combination of the attributes is supposed to generate output (if the data exists). The attributes come from many different tables. I don't have any experience in designing the mechanism that generates the SQL based on this kind of ad-hoc attribute selection, so I don't know whether the query design in the first example is required to service this kind of reporting tool.
Don't worry, historically Oracle used the first notation for inner joins but later on adopted ANSI SQL standards.
The results in terms of performance and returned record sets are exactly the same: the implicit 'comma' joins do not produce a cross product of the result sets but effectively integrate the WHERE filters. If you doubt it, run EXPLAIN PLAN for both queries and you will see that the forecasted execution plans are identical.
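In Oracle, using the tables from the question, that check looks like this (the plan output will vary with version and statistics):

EXPLAIN PLAN FOR
SELECT t1.col1
FROM tbl1 t1, tbl2 t2
WHERE t1.col1 = t2.col1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

EXPLAIN PLAN FOR
SELECT t1.col1
FROM tbl1 t1
INNER JOIN tbl2 t2 ON t1.col1 = t2.col1;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);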
Expanding this answer: in the future you may notice the analogous (+) notation used in place of outer joins. This answer stands correct in that context as well.
The real issue comes when both notations (implicit and explicit joins) are mixed in the same query. This would be asking for trouble big time, but I doubt you find such a case in OBIEE.
Those are inner joins, not cross joins; they just use the old syntax for doing it rather than ANSI as you were expecting.
Most join queries contain at least one join condition, either in the FROM clause or in the WHERE clause. (Oracle Documentation)
For a simple query such as in your example the execution should be exactly the same.
Where you have set up outer joins (in the business model join) you will see OBI produce a query where the inner joins are made in the WHERE clause and the outer joins are done ANSI-style in the FROM clause – just to make things really hard to debug!
SELECT ...
FROM tbl1
,tbl2
,tbl3 left outer join
tbl4 on tbl3.col1 = tbl4.col2
WHERE tbl1.col1 = tbl2.col1
and tbl3.col2 = tbl2.col2
and tbl4.col3 = tbl3.col3
I am curious about how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins.
SQL Server, before executing a query, generates an Execution Plan. The Execution Plan is basically an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc.
On a 'Join' node in our execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server will choose which algorithm to use for each Join operation based on the expected number of rows in the inner and outer tables, what type of join we are doing (some algorithms don't support all types of joins), whether we need the data ordered, and probably many other factors.
Join Algorithms:
Nested Loop Join:
Best for small inputs; can be optimized with an ordered inner table.
Merge Join:
Best for medium to large sorted inputs, or for output that needs to be ordered.
Hash Join:
Best for medium to large inputs, can be parallelized to scale linearly.
LINQ Query:
DataTable firstTable, secondTable;
...
var rows = from firstRow in firstTable.AsEnumerable ()
join secondRow in secondTable.AsEnumerable ()
on firstRow.Field<object> (randomObject.Property)
equals secondRow.Field<object> (randomObject.Property)
select new {firstRow, secondRow};
SQL Query:
SELECT *
FROM firstTable fT
INNER JOIN secondTable sT ON fT.Property = sT.Property
SQL Server might use a Nested Loop Join if it knows there are a small number of rows in each table, a Merge Join if it knows the inputs are already sorted (for example, by an index on the join key), and a Hash Join if it knows there are a lot of rows in either table and neither is usefully indexed.
Does LINQ choose its join algorithm, or does it always use the same one?
The methods on System.Linq.Enumerable are performed in the order they are issued. There is no query optimizer at play.
Many methods are very lazy, which allows you to not fully enumerate the source by putting .First or .Any or .Take at the end of the query. That is the easiest optimization to be had.
For System.Linq.Enumerable.Join specifically, the docs state that this is a hash join.
The default equality comparer, Default, is used to hash and compare keys.
So examples:
// hash join (n+m): Enumerable.Join
from a in theAs
join b in theBs on a.prop equals b.prop
// nested loop join (n*m): Enumerable.SelectMany
from a in theAs
from b in theBs
where a.prop == b.prop
Linq to SQL does not send join hints to the server. Thus the performance of a join using Linq to SQL will be identical to the performance of the same join sent "directly" to the server (i.e. using pure ADO or SQL Server Management Studio) without any hints specified.
Linq to SQL also doesn't allow you to use join hints (as far as I know). So if you want to force a specific type of join, you'll have to do it using a stored procedure or the Execute[Command|Query] method. But unless you specify a join type by writing INNER [HASH|LOOP|MERGE] JOIN, then SQL Server always picks the type of join it thinks will be most efficient - it doesn't matter where the query came from.
Other Linq query providers - such as Entity Framework and NHibernate Linq - will do exactly the same thing as Linq to SQL. None of these have any direct knowledge of how you've indexed your database and so none of them send join hints.
Linq to Objects is a little different - it will (almost?) always perform a "hash join" in SQL Server parlance. That is because it lacks the indexes necessary to do a merge join, and hash joins are usually more efficient than nested loops, unless the number of elements is very small. But determining the number of elements in an IEnumerable<T> might require a full iteration in the first place, so in most cases it's faster just to assume the worst and use a hashing algorithm.
LINQ itself does not choose algorithms of any kind, as LINQ, strictly speaking, is simply a way of expressing a query in SQL-like syntax that can map to function calls on either IEnumerable<T> or IQueryable<T>. LINQ is entirely a language feature and does not provide functionality, only another way of expressing existing function calls.
In the case of IQueryable<T>, it's entirely up to the provider (such as LINQ to SQL) to choose the best method of producing the results.
In the case of LINQ to Objects (using IEnumerable<T>), simple enumeration is what's used (roughly equivalent to nested loops) in all cases. There is no deep inspection of (or even knowledge of) the underlying data types in order to optimize the query.
SELECT * FROM TableA
INNER JOIN TableB
ON TableA.name = TableB.name
SELECT * FROM TableA, TableB
where TableA.name = TableB.name
Which is the preferred way and why?
Will there be any performance difference when keywords like JOIN is used?
Thanks
The second way is the classical way of doing it, from before the join keyword existed.
Normally the query processor generates the same database operations from the two queries, so there would be no difference in performance.
Using join better describes what you are doing in the query. If you have many joins, it's also better because the joined table and its condition are beside each other, instead of all the tables being in one place and all the conditions in another.
Another aspect is that it's easier to do an unbounded join by mistake using the second way, resulting in a cross join containing all combinations from the two tables.
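For example (hypothetical tables), with the comma syntax a missing condition goes unnoticed:

-- Three tables but only one join condition: t3 is silently
-- cross-joined against the rest, and the query still runs.
SELECT *
FROM t1, t2, t3
WHERE t1.id = t2.t1_id;

-- The explicit syntax will not even parse without an ON clause:
SELECT *
FROM t1
INNER JOIN t2 ON t2.t1_id = t1.id
INNER JOIN t3 ON t3.t2_id = t2.id;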
Use the first one, as it is:
More explicit
Is the Standard way
As for performance - there should be no difference.
Find out by using EXPLAIN SELECT …
It depends on the engine used, on the query optimizer, on the keys, on the table; on pretty much everything.
In some SQL engines the second form (associative joins) is deprecated. Use the first form.
The second is less explicit and causes beginners to SQL to pause when writing code. It is much more difficult to manage in complex SQL because the join conditions in the WHERE clause have to be kept in step with the sequence of tables they match; if they fall out of step, the returned data set changes, which goes against the idea that sequence should not change the results when elements at the same level are considered.
When joins containing multiple tables are created, it gets REALLY difficult to code, quite fast, using the second form.
EDIT: Performance: I consider ease of coding and debugging part of personal performance, so ease of edit/debug/maintenance makes the first form perform better for me - it just takes me less time to do and understand stuff during the development and maintenance cycles.
Most current databases will optimize both of those queries into the exact same execution plan. However, use the first syntax; it is the current standard. By learning and using this join syntax, it will help when you do queries with LEFT OUTER JOIN and RIGHT OUTER JOIN, which become tricky and problematic using the older syntax with the joins in the WHERE clause.
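For comparison, a sketch of an outer join in both styles (hypothetical employees/departments tables; the (+) marker is Oracle's old notation, and SQL Server's *= operator had the same role):

-- Old style: the outer join is buried in the WHERE clause.
SELECT e.name, d.dept_name
FROM employees e, departments d
WHERE e.dept_id = d.id (+);

-- ANSI style: the intent is explicit and portable.
SELECT e.name, d.dept_name
FROM employees e
LEFT OUTER JOIN departments d ON d.id = e.dept_id;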
Filtering joins solely using WHERE can be extremely inefficient in some common scenarios. For example:
SELECT * FROM people p, companies c WHERE p.companyID = c.id AND p.firstName = 'Daniel'
Most databases will execute this query quite literally, first taking the Cartesian product of the people and companies tables and then filtering by those which have matching companyID and id fields. While the fully-unconstrained product does not exist anywhere but in memory and then only for a moment, its calculation does take some time.
A better approach is to group the constraints with the JOINs where relevant. This is not only subjectively easier to read but also far more efficient. Thusly:
SELECT * FROM people p JOIN companies c ON p.companyID = c.id
WHERE p.firstName = 'Daniel'
It's a little longer, but the database is able to look at the ON clause and use it to compute the fully-constrained JOIN directly, rather than starting with everything and then limiting down. This is faster to compute (especially with large data sets and/or many-table joins) and requires less memory.
I change every query I see which uses the "comma JOIN" syntax. In my opinion, the only purpose for its existence is conciseness. Considering the performance impact, I don't think this is a compelling reason.
It seems like to combine two or more tables, we can either use join or where. What are the advantages of one over the other?
Any query involving more than one table requires some form of association to link the results from table "A" to table "B". The traditional (ANSI-89) means of doing this is to:
List the tables involved in a comma separated list in the FROM clause
Write the association between the tables in the WHERE clause
SELECT *
FROM TABLE_A a,
TABLE_B b
WHERE a.id = b.id
Here's the query re-written using ANSI-92 JOIN syntax:
SELECT *
FROM TABLE_A a
JOIN TABLE_B b ON b.id = a.id
From a Performance Perspective:
Where supported (Oracle 9i+, PostgreSQL 7.2+, MySQL 3.23+, SQL Server 2000+), there is no performance benefit to using either syntax over the other. The optimizer sees them as the same query. But more complex queries can benefit from using ANSI-92 syntax:
Ability to control JOIN order - the order which tables are scanned
Ability to apply filter criteria on a table prior to joining (see the sketch below)
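To sketch the second point with an outer join (status is a hypothetical column), the placement of a filter changes both the plan and the result:

-- Filter applied to TABLE_B before joining: unmatched TABLE_A rows
-- survive, with NULLs in b's columns.
SELECT a.id, b.status
FROM TABLE_A a
LEFT JOIN TABLE_B b ON b.id = a.id AND b.status = 'ACTIVE';

-- The same filter applied after joining: the NULL rows are discarded,
-- silently turning the outer join into an inner join.
SELECT a.id, b.status
FROM TABLE_A a
LEFT JOIN TABLE_B b ON b.id = a.id
WHERE b.status = 'ACTIVE';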
From a Maintenance Perspective:
There are numerous reasons to use ANSI-92 JOIN syntax over ANSI-89:
More readable, as the JOIN criteria is separate from the WHERE clause
Less likely to miss JOIN criteria
Consistent syntax support for JOIN types other than INNER, making queries easy to use on other databases
The WHERE clause serves only as filtration of the Cartesian product of the tables joined
From a Design Perspective:
ANSI-92 JOIN syntax is a pattern, not an anti-pattern:
The purpose of the query is more obvious; the columns used by the application are clear
It follows the modularity rule about using strict typing whenever possible. Explicit is almost universally better.
Conclusion
Short of familiarity and/or comfort, I don't see any benefit to continuing to use the ANSI-89 WHERE clause instead of the ANSI-92 JOIN syntax. Some might complain that ANSI-92 syntax is more verbose, but that's what makes it explicit. The more explicit, the easier it is to understand and maintain.
These are the problems with using the WHERE syntax (otherwise known as the implicit join):
First, it is all too easy to get accidental cross joins because the join conditions are not right next to the table names. If you have 6 tables being joined together, it is easy to miss one in the WHERE clause. You will see this fixed all too often by using the DISTINCT keyword. This is a huge performance hit for the database. You can't get an accidental cross join using the explicit join syntax, as it will fail the syntax check.
Right and left joins are problematic in the old syntax in some databases (in SQL Server you are not guaranteed to get the correct results). Furthermore, I know they are deprecated in SQL Server.
If you intend to use a cross join, that is not clear from the old syntax. It is clear using the current ANSI standard.
It is much harder for the maintainer to see exactly which fields are part of the join or even which tables join together in what order using the implicit syntax. This means it might take more time to revise the queries. I have known very few people who, once they took the time to feel comfortable with the explicit join syntax, ever went back to the old way.
I've also noticed that some people who use these implicit joins don't actually understand how joins work and thus are getting incorrect results in their queries.
Honestly, would you use any other kind of code that was replaced with a better method 18 years ago?
Most people tend to find the JOIN syntax a bit clearer as to what is being joined to what. Additionally, it has the benefit of being a standard.
Personally, I "grew up" on WHEREs, but the more I use the JOIN syntax the more I'm starting to see how it's more clear.
Explicit joins convey intent, leaving the where clause to do the filtering. It is cleaner and it is standard, and you can do things such as left outer or right outer which is harder to do only with where.
You can't use WHERE to combine two tables. What you can do though is to write:
SELECT * FROM A, B
WHERE ...
The comma here is equivalent to writing:
SELECT *
FROM A
CROSS JOIN B
WHERE ...
Would you write that? No - because it's not what you mean at all. You don't want a cross join, you want an INNER JOIN. But when you write comma, you're saying CROSS JOIN and that's confusing.
Actually you often need both "WHERE" and "JOIN".
"JOIN" is used to retrieve data from two tables - based ON the values of a common column. If you then want to further filter this result, use the WHERE clause.
For example, "LEFT JOIN" retrieves ALL rows from the left table, plus the matching rows from the right table. But that does not filter the records on any specific value or on other columns that are not part of the JOIN. Thus, if you want to further filter this result, specify the extra filters in the WHERE clause.