Generating select statement with joins from table information - sql

I've got a bunch of classes that describe a database schema: Table, Field, ForeignKey.
Each Table has a list of Fields and a list of ForeignKeys.
Now I would like to generate a SELECT statement with all the joins that are described by the ForeignKey instances.
The question is: is the order of the tables relevant to the query time? In other words, do I have to care about it, or is it handled automatically for me by the db engine?

is the order of the tables relevant to the query time? In other words, do I have to care about it, or is it handled automatically for me by the db engine?
To the optimizer, no -- it doesn't matter.
For the sake of readability and maintenance, you might want to consider laying the FROM and JOIN clauses out in a manner that reads well. If you are only dealing with INNER joins there's no issue, but OUTER joins I generally define after the FROM clause, and I use LEFT JOIN syntax exclusively. That's a matter of style and taste, though...
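For instance, a generated statement in that style might come out like this (table and column names here are hypothetical, just to show the layout; the optional foreign key becomes a LEFT JOIN):
-- orders has a required FK to customers and an optional FK to order_statuses
SELECT o.id,
       o.placed_at,
       c.name        AS customer_name,
       s.description AS status
FROM orders o
JOIN customers c ON c.id = o.customer_id
LEFT JOIN order_statuses s ON s.id = o.status_id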

Related

Are joins the proper way to do cross table queries?

I have a few tables in third normal form and I need to do some cross-table queries to get the information I need.
I looked at joins, but it seems like a join will create a new table. Is this the proper way to perform such queries, or should I just do nested queries? I guess it might make sense if I have to do these queries a lot? I'm really not sure how well optimized these operations are. I'm using the Sequelize ORM and I don't see any clear solution.
It seems to me you are asking about joins vs subqueries. These are to some extent different. But let's start with a couple of points.
A join creates a new relvar, not a new table. A relvar is a variable standing in for the relation output by the join operation. It is transient (as opposed to a view which would be persistent).
Joins and subqueries are not always perfect substitutes. Sometimes you will need both.
Your query output is also a relvar.
The above being said, generally where possible I think joins are preferable. The major reason is that a SQL query that can be written using the structure below is far easier (as you master the language) to both understand and debug than most alternatives; also, subqueries in column lists tend to perform badly:
SELECT [column_list]
FROM [initial_table]
[join list]
WHERE [filters]
GROUP BY [grouping list]
HAVING [post-aggregation filters]
LIMIT [limit and offset]
If your query fits the above structure, you can usually expect specific kinds of logic problems to occur in specific parts of the query. With subqueries, on the other hand, you have to check each of them independently.
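As a concrete sketch (hypothetical customers/orders tables), the first query below fits that structure, while the second pushes a subquery into the column list and may be evaluated once per row on engines that do not decorrelate it:
-- Join version: fits the structure above
SELECT c.id, c.name, COUNT(o.id) AS order_count
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
GROUP BY c.id, c.name

-- Subquery-in-the-column-list version: the inner SELECT is tied to each row
SELECT c.id, c.name,
       (SELECT COUNT(*) FROM orders o WHERE o.customer_id = c.id) AS order_count
FROM customers c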

correct query design? cross joins driving ad-hoc reporting interface

I'm hoping some of the more experienced database/dwh developers or DBAs can weigh in on this one:
My team is using OBIEE as a front-end tool to drive ad-hoc reporting being done by our business units.
There is a lot of latency when generating sets that are relatively small. We are facing ~1 hour to produce ~50k records.
I looked into one of the queries that is behaving this way, and I was surprised to find that all of the tables being referenced are being cross-joined, and then filters are being applied in the WHERE clause.
So, to illustrate, the queries tend to look like this:
SELECT ...
FROM tbl1
,tbl2
,tbl3
,tbl4
WHERE tbl1.col1 = tbl2.col1
and tbl3.col2 = tbl2.col2
and tbl4.col3 = tbl3.col3
instead of like this:
SELECT ...
FROM tbl1
INNER JOIN tbl2
ON tbl1.col1 = tbl2.col1
INNER JOIN tbl3
ON tbl3.col2 = tbl2.col2
INNER JOIN tbl4
ON tbl4.col3 = tbl3.col3
Now, from what I know about the order of query operations, the FROM clause gets performed before the WHERE clause, so the first example would perform much more slowly than the latter example. Am I correct (please answer only if you know the answer in the context of Oracle DB)? Unfortunately, I don't have the admin rights to run a trace against the 2 different versions of the query.
Is there a reason to set up the query the first way, related to how the OBIEE interface works? Remember, the query is the result of a user dragging and dropping attributes into a sandbox, from a 'bank' of attributes. Selecting any combination of the attributes is supposed to generate output (if the data exists). The attributes come from many different tables. I don't have any experience in designing the mechanism that generates the SQL based on this kind of ad-hoc attribute selection, so I don't know whether the query design in the first example is required to service this kind of reporting tool.
Don't worry: historically Oracle used the first notation for inner joins and only later adopted the ANSI SQL standard.
The results, in terms of both performance and the returned record sets, are exactly the same: the implicit 'comma' joins do not produce a cross product, because the WHERE filters are effectively treated as join conditions. If you doubt it, run EXPLAIN PLAN for both queries and you will see that the predicted execution plans are identical.
Expanding on this answer: you may also come across the analogous (+) notation used in place of outer joins. The same conclusion holds in that context.
The real issue comes when both notations (implicit and explicit joins) are mixed in the same query. That is asking for trouble big time, but I doubt you will find such a case in OBIEE.
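If you want to verify this yourself in Oracle, a rough sketch of the comparison using the two example queries (column lists reduced to * for brevity):
-- Plan for the comma-join version
EXPLAIN PLAN FOR
SELECT *
FROM tbl1, tbl2, tbl3, tbl4
WHERE tbl1.col1 = tbl2.col1
  AND tbl3.col2 = tbl2.col2
  AND tbl4.col3 = tbl3.col3;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Plan for the ANSI version; the two plans should come out the same
EXPLAIN PLAN FOR
SELECT *
FROM tbl1
INNER JOIN tbl2 ON tbl1.col1 = tbl2.col1
INNER JOIN tbl3 ON tbl3.col2 = tbl2.col2
INNER JOIN tbl4 ON tbl4.col3 = tbl3.col3;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);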
Those are inner joins, not cross joins; they just use the old syntax rather than the ANSI syntax you were expecting.
Most join queries contain at least one join condition, either in the FROM clause or in the WHERE clause. (Oracle Documentation)
For a simple query such as in your example the execution should be exactly the same.
Where you have set up outer joins (in the business model join) you will see OBI produce a query where the inner joins are made in the WHERE clause and the outer joins are done ANSI-style in the FROM clause – just to make things really hard to debug!
SELECT ...
FROM tbl1
,tbl2
,tbl3 left outer join
tbl4 on tbl3.col1 = tbl4.col2
WHERE tbl1.col1 = tbl2.col1
and tbl3.col2 = tbl2.col2
and tbl4.col3 = tbl3.col3
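For comparison, a fully ANSI rewrite of that generated query would look something like this (same hypothetical tables; note that the extra tbl4 predicate has been moved into the ON clause, since leaving it in the WHERE clause would filter out the NULL rows and effectively turn the outer join back into an inner join):
SELECT ...
FROM tbl1
INNER JOIN tbl2 ON tbl1.col1 = tbl2.col1
INNER JOIN tbl3 ON tbl3.col2 = tbl2.col2
LEFT OUTER JOIN tbl4 ON tbl3.col1 = tbl4.col2
                    AND tbl4.col3 = tbl3.col3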

SQL Code Smells

Could you please list some of the bad practices in SQL that novices tend to fall into?
I have found the use of WHILE loops in scenarios that could be resolved using set operations.
Another example is inserting data only if it does not exist. This can be achieved with a LEFT OUTER JOIN; some people go for an IF instead.
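For example, the insert-if-not-exists case can be handled with a single set-based statement along these lines (a rough sketch; staging and target are hypothetical table names):
-- Insert only the staging rows whose key is not already present in target
INSERT INTO target (id, name)
SELECT s.id, s.name
FROM staging s
LEFT OUTER JOIN target t ON t.id = s.id
WHERE t.id IS NULL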
Any other thoughts?
Edit: What I am looking for is specific scenarios (like the ones mentioned above) that can be handled in plain SQL without procedural constructs.
Thanks, Lijo
Here are some I have seen:
Using cursors instead of equivalent (and faster) set operations such as joins (see the sketch after this list).
Dynamic SQL for everything.
Code that is open to SQL Injection attacks.
Full outer joins even when they are not needed.
Huge stored procedures (hundreds/thousands of lines).
No comments.
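To illustrate the cursor point above, a rough sketch of a row-by-row update replaced by a single set-based statement (hypothetical tables; UPDATE ... FROM is SQL Server style, other databases spell it differently):
-- Instead of a cursor that walks the orders table and updates one row at a
-- time, the same work is done in one set-based statement
UPDATE o
SET o.total = d.line_total
FROM orders o
JOIN ( SELECT order_id, SUM(price * qty) AS line_total
       FROM order_lines
       GROUP BY order_id
     ) d ON d.order_id = o.id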
Placing ODBC or dynamic SQL calls all over the code.
Often it is better to define a data abstraction layer that provides access to the databases. All the SQL code can hide in that layer. This often avoids replication of similar queries, and makes changing data models easier to do.
Personally for me: anything that is not a plain INSERT, UPDATE, DELETE or SELECT statement
I don't like logic in SQL.
My biggest beef here is definitely repetitive SQL. As an example, multiple stored procedures that perform the exact same joins but different filters.
Using views in such cases can make your database MUCH easier to look at and work with.
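For instance, the shared joins can be defined once in a view, and each report or procedure then applies only its own filter (hypothetical names):
-- Define the common join once
CREATE VIEW customer_orders AS
SELECT c.id   AS customer_id,
       c.name AS customer_name,
       o.id   AS order_id,
       o.total
FROM customers c
JOIN orders o ON o.customer_id = c.id;

-- Each report applies only its own filter on top of the view
SELECT * FROM customer_orders WHERE total > 100;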
Creating vendor-specific SQL, when generic SQL would do.
Creating tables dynamically at runtime (other than TEMPORARY tables).
Letting your application code have table create or super user privs.
Since the question asks for a list of SQL smells, no answer can be exhaustive. I will be expanding this answer as time permits and memory serves:
Redundant grouping
Redundant grouping is the application of the GROUP BY clause—and consequently of aggregate functions—to more columns than required. It occurs when the author starts by collecting most or all of the data needed and only groups it at the very end. Redundant grouping is therefore late grouping; the correct approach is to group early, and to group only the data that needs grouping.
If a main entity (main) has a journal (jrnl) and refers to another entity, an appendage (apnd), then the following query:
SELECT
    main.Id,
    main.Name,
    MAX(jrnl.Entry) AS Entry,
    MAX(jrnl.Date)  AS Date,
    apnd.Reference,
    apnd.Status
FROM main
JOIN jrnl ON jrnl.Parent = main.Id
JOIN apnd ON apnd.Id = main.ApndId
GROUP BY main.Id, main.Name, apnd.Reference, apnd.Status
has redundant grouping, because the sole purpose of the GROUP BY clause is to obtain the latest journal entry. It should be rewritten in a non-redundant manner as follows:
SELECT
    main.Id,
    main.Name,
    skel.MaxEntry AS Entry,
    jrnl.Date,
    apnd.Reference,
    apnd.Status
FROM
    ( SELECT
          jrnl.Parent     AS MainId,
          MAX(jrnl.Entry) AS MaxEntry
      FROM jrnl
      GROUP BY jrnl.Parent
    ) skel
JOIN main ON main.Id    = skel.MainId
JOIN jrnl ON jrnl.Entry = skel.MaxEntry
JOIN apnd ON apnd.Id    = main.ApndId
That is—we group on the narrowest dataset possible, and join the rest afterwards, even if it means referencing the same tables!

INNER JOIN keywords | with and without using them

SELECT * FROM TableA
INNER JOIN TableB
ON TableA.name = TableB.name
SELECT * FROM TableA, TableB
WHERE TableA.name = TableB.name
Which is the preferred way and why?
Will there be any performance difference when keywords like JOIN are used?
Thanks
The second way is the classical way of doing it, from before the join keyword existed.
Normally the query processor generates the same database operations from the two queries, so there would be no difference in performance.
Using JOIN better describes what you are doing in the query. If you have many joins, it's also better because each joined table and its condition sit next to each other, instead of all the tables being listed in one place and all the conditions in another.
Another aspect is that it's easier to do an unbounded join by mistake using the second way, resulting in a cross join containing all combinations of rows from the two tables.
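For example (hypothetical third table), forgetting one condition in the comma form silently produces every combination of rows, while the explicit form makes each join's condition visible right next to the table it joins:
-- The TableC condition was forgotten: every matching (TableA, TableB) pair
-- is combined with every row of TableC
SELECT * FROM TableA, TableB, TableC
WHERE TableA.name = TableB.name

-- With explicit joins each table gets its own ON clause, so a missing
-- condition stands out immediately
SELECT * FROM TableA
INNER JOIN TableB ON TableA.name = TableB.name
INNER JOIN TableC ON TableC.name = TableB.name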
Use the first one, as it is:
More explicit
The standard way
As for performance - there should be no difference.
Find out by using EXPLAIN SELECT …
It depends on the engine used, on the query optimizer, on the keys, on the table; on pretty much everything.
In some SQL engines the second form (associative joins) is deprecated. Use the first form.
The second form is less explicit and causes beginners to SQL to pause when writing code. It is also much more difficult to manage in complex SQL, because the sequence of tables in the FROM clause has to be kept in step with the sequence of join conditions in the WHERE clause; if they fall out of step it is easy to change the returned data set unintentionally, which goes against the idea that the ordering of elements at the same level should not change the results.
When joins involving multiple tables are created, it gets REALLY difficult to code correctly quite fast using the second form.
EDIT: Performance: I consider ease of coding and debugging part of personal performance, so ease of editing, debugging and maintenance makes the first form the better performer for me - it just takes less time to do and understand things during the development and maintenance cycles.
Most current databases will optimize both of those queries into the exact same execution plan. However, use the first syntax: it is the current standard. Learning and using the JOIN syntax will also help when you write queries with LEFT OUTER JOIN and RIGHT OUTER JOIN, which become tricky and problematic with the older syntax that puts the joins in the WHERE clause.
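As an illustration of why outer joins are tricky in the old style, compare Oracle's proprietary (+) notation with the ANSI form (hypothetical tables):
-- Old Oracle-specific outer join: (+) marks the side that may have no match
SELECT e.name, d.dept_name
FROM employees e, departments d
WHERE e.dept_id = d.id(+)

-- ANSI equivalent, readable on any modern database
SELECT e.name, d.dept_name
FROM employees e
LEFT OUTER JOIN departments d ON d.id = e.dept_id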
Filtering joins solely using WHERE can be extremely inefficient in some common scenarios. For example:
SELECT * FROM people p, companies c WHERE p.companyID = c.id AND p.firstName = 'Daniel'
Some databases will execute this query quite literally, first taking the Cartesian product of the people and companies tables and then filtering by the rows which have matching companyID and id fields. While the fully unconstrained product does not exist anywhere but in memory, and then only for a moment, its calculation does take some time.
A better approach is to group the constraints with the JOINs where relevant. This is not only subjectively easier to read but also far more efficient:
SELECT * FROM people p JOIN companies c ON p.companyID = c.id
WHERE p.firstName = 'Daniel'
It's a little longer, but the database is able to look at the ON clause and use it to compute the fully-constrained JOIN directly, rather than starting with everything and then limiting down. This is faster to compute (especially with large data sets and/or many-table joins) and requires less memory.
I change every query I see which uses the "comma JOIN" syntax. In my opinion, the only purpose for its existence is conciseness. Considering the performance impact, I don't think this is a compelling reason.

In MySQL queries, why use join instead of where?

It seems like to combine two or more tables, we can either use join or where. What are the advantages of one over the other?
Any query involving more than one table requires some form of association to link the results from table "A" to table "B". The traditional (ANSI-89) means of doing this is to:
List the tables involved in a comma separated list in the FROM clause
Write the association between the tables in the WHERE clause
SELECT *
FROM TABLE_A a,
TABLE_B b
WHERE a.id = b.id
Here's the query re-written using ANSI-92 JOIN syntax:
SELECT *
FROM TABLE_A a
JOIN TABLE_B b ON b.id = a.id
From a Performance Perspective:
Where supported (Oracle 9i+, PostgreSQL 7.2+, MySQL 3.23+, SQL Server 2000+), there is no performance benefit to using either syntax over the other. The optimizer sees them as the same query. But more complex queries can benefit from using ANSI-92 syntax:
Ability to control JOIN order - the order which tables are scanned
Ability to apply filter criteria on a table prior to joining
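For example, with an outer join the ANSI syntax lets a filter live in the ON clause, restricting the joined table before the join without discarding unmatched rows (hypothetical tables and columns):
-- Keep every customer; only orders from 2009 are attached.
-- Moving the o.order_year condition into the WHERE clause would also drop
-- the customers that have no 2009 order.
SELECT c.id, c.name, o.id AS order_id
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
                  AND o.order_year = 2009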
From a Maintenance Perspective:
There are numerous reasons to use ANSI-92 JOIN syntax over ANSI-89:
More readable, as the JOIN criteria is separate from the WHERE clause
Less likely to miss JOIN criteria
Consistent syntax support for JOIN types other than INNER, making queries easy to use on other databases
The WHERE clause serves only to filter the joined result, instead of also carrying the join criteria
From a Design Perspective:
ANSI-92 JOIN syntax is a pattern, not an anti-pattern:
The purpose of the query is more obvious; the columns used by the application are clear
It follows the modularity rule of using strict typing whenever possible. Explicit is almost universally better.
Conclusion
Short of familiarity and/or comfort, I don't see any benefit to continuing to use the ANSI-89 WHERE clause instead of the ANSI-92 JOIN syntax. Some might complain that ANSI-92 syntax is more verbose, but that's what makes it explicit. The more explicit, the easier it is to understand and maintain.
These are the problems with using the WHERE syntax (otherwise known as the implicit join):
First, it is all too easy to get accidental cross joins, because the join conditions are not right next to the table names. If you have six tables being joined together, it is easy to miss one in the WHERE clause. You will see this 'fixed' all too often by adding the DISTINCT keyword, which is a huge performance hit for the database. You can't get an accidental cross join using the explicit join syntax, because the query will fail the syntax check.
Right and left joins are problematic in the old syntax in some databases (in SQL Server you are not guaranteed to get the correct results), and that outer-join syntax is deprecated in SQL Server as far as I know.
If you intend to use a cross join, that is not clear from the old syntax. It is clear using the current ANSI standard.
It is much harder for a maintainer to see exactly which fields are part of the join, or even which tables join together in what order, using the implicit syntax. This means it might take more time to revise the queries. I have known very few people who, once they took the time to get comfortable with the explicit join syntax, ever went back to the old way.
I've also noticed that some people who use these implicit joins don't actually understand how joins work and thus are getting incorrect results in their queries.
Honestly, would you use any other kind of code that was replaced with a better method 18 years ago?
Most people tend to find the JOIN syntax a bit clearer as to what is being joined to what. Additionally, it has the benefit of being a standard.
Personally, I "grew up" on WHEREs, but the more I use the JOIN syntax the more I'm starting to see how it's more clear.
Explicit joins convey intent, leaving the WHERE clause to do the filtering. It is cleaner, it is standard, and you can do things such as LEFT OUTER or RIGHT OUTER joins, which are harder to do using WHERE alone.
You can't use WHERE to combine two tables. What you can do though is to write:
SELECT * FROM A, B
WHERE ...
The comma here is equivalent to writing:
SELECT *
FROM A
CROSS JOIN B
WHERE ...
Would you write that? No - because it's not what you mean at all. You don't want a cross join, you want an INNER JOIN. But when you write comma, you're saying CROSS JOIN and that's confusing.
Actually you often need both "WHERE" and "JOIN".
"JOIN" is used to retrieve data from two tables - based ON the values of a common column. If you then want to further filter this result, use the WHERE clause.
For example, "LEFT JOIN" retrieves ALL rows from the left table, plus the matching rows from the right table. But that does not filter the records on any specific value or on other columns that are not part of the JOIN. Thus, if you want to further filter this result, specify the extra filters in the WHERE clause.
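A small sketch of that combination (hypothetical tables): the ON clause pairs the rows, and the WHERE clause then filters the joined result:
-- All customers in Paris and, where they exist, their orders
SELECT c.name, o.id AS order_id
FROM customers c
LEFT JOIN orders o ON o.customer_id = c.id
WHERE c.city = 'Paris'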