Does the order of the columns in a SELECT statement make a difference?

This question was inspired by a previous question posted on SO, "Does the order of the WHERE clause make a difference?". Would it improve a SELECT statement's performance if the columns used in the WHERE clause were placed at the beginning of the SELECT statement?
example:
SELECT customer.id,
       transaction.id,
       transaction.effective_date,
       transaction.a,
       [...]
FROM customer, transaction
WHERE customer.id = transaction.id;
I do know that limiting the list of columns to only the ones needed in a SELECT statement improves performance compared with SELECT *, because the column list, and therefore each returned row, is smaller.

For Oracle, Informix, and any other self-respecting DBMS, the order of the columns should have no impact on performance. Similarly, the query engine should find the optimal order in which to process the WHERE clause, so the order there should not matter either, all things being equal (i.e., looking past constructs which might force an execution order).
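If you want to verify this on your own system, a quick sketch (Oracle syntax; the table and column names come from the question, and this check is my addition rather than part of the original answer):

EXPLAIN PLAN FOR
SELECT customer.id, transaction.id, transaction.effective_date
FROM customer, transaction
WHERE customer.id = transaction.id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- Re-run EXPLAIN PLAN with the select list reordered; the plan
-- (join method, access paths, cost) should come back identical.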

Related

If you do a simple SELECT-WHERE on a CTE that is already sorted, are your results guaranteed to still be in that same order, just filtered?

Wondering about expected/deterministic ordering output from Oracle 11g for queries based on sorted CTEs.
Consider this (extremely oversimplified) example SQL query. Again, note how the CTE has an ORDER BY clause in it.
WITH SortedArticles AS (
  SELECT *
  FROM Articles
  ORDER BY DatePublished
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe';
Can it be assumed that the output rows are guaranteed to be in the same order as the CTE, or do I have to re-sort them a second time?
Again, this is an extremely over-simplified example, but it contains the important parts of what I'm asking. They are:
The CTE is sorted
The final SELECT statement selects only against the CTE, nothing else (no joins, etc.), and
The final SELECT statement only specifies a WHERE clause. It is purely a filtering statement.
The short answer is no. The only way to guarantee ordering is with an ORDER BY clause on your outer query. But there is no need to sort the results in the CTE in that situation.
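For example, this minimal rework of the question's query (my sketch, using the same tables and columns) guarantees the order:

WITH SortedArticles AS (
  SELECT *
  FROM Articles
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe'
ORDER BY DatePublished;  -- the ORDER BY belongs on the outer query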
However, if the sort expression is complex and you need sorting in derived CTEs (e.g. because of OFFSET/FETCH or ROWNUM), you can simplify the subsequent sorting by adding a row-number column to the original CTE based on its sort criteria, and then just sorting the derived CTEs by that row number. For your example:
WITH SortedArticles AS (
  SELECT *,
         ROW_NUMBER() OVER (ORDER BY DatePublished) AS rn
  FROM Articles
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe'
ORDER BY rn;
No, the results are not guaranteed to be in the same order as in the subquery. Never were, never will be. You may observe a certain behaviour, especially if the CTE is materialized, which you can try to influence with optimizer hints like /*+ MATERIALIZE */ and /*+ INLINE */. However, the behaviour of the query optimizer also depends on data volume, I/O vs. CPU speed, and most importantly the database version. For instance, Oracle 12.2 introduced a feature called "In-Memory Cursor Duration Temp Table" that tries to speed up queries like yours, without preserving the order in the subquery.
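For illustration, the hints mentioned above go in the CTE's select list (a sketch only; as noted, even a materialized CTE does not guarantee output order):

WITH SortedArticles AS (
  SELECT /*+ MATERIALIZE */ *
  FROM Articles
  ORDER BY DatePublished
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe';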
I'd go along with @Nick's suggestion of adding a row number field in the subquery.

Splitting large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is my hardware isn't so great and the dataset contains tens of millions of rows with about 10 columns per table. When I run the below code, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the tables into two parts and then filtering/analysing/combining as needed going forward. I think that because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I start with the below code:
df <- querySql(conn,"SELECT TOP 5000000 FROM Table1")
And then where I get stuck is how to create a second dataframe, starting at row 5,000,001 and ending at the final row n of Table1.
I'm open to suggestions but I think there are two potential answers to this question. The first is to work within the querySql to get it working. The second is to use an R function other than querySql (no idea what this would look like). I'm limited to R due to work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs do impose an order on their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records to be returned by a query (let's call these the "returned records") below the number of records that could be returned by that query (let's call these the "selected records") and if you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the partition. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 0 and rn <= 5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 5000000 and rn <= 10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 10000000 and rn <= 15000000;
...
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
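For SQL Server 2012 and later, the limit/offset solution looks something like this (a sketch reusing the assumed id columns from above):

select * from table1 order by id1, id2, ... offset 0 rows fetch next 5000000 rows only;
select * from table1 order by id1, id2, ... offset 5000000 rows fetch next 5000000 rows only;
select * from table1 order by id1, id2, ... offset 10000000 rows fetch next 5000000 rows only;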

SELECT DISTINCT Inside WHERE IN clause performance

I have a performance question about the following code...
SELECT * FROM GCL_Loans WHERE Loan_ID IN
(
  SELECT Loan_ID FROM GCL_Loan_Items
)
GCL_Loans has a list of loans with basic information.
GCL_Loan_Items has information about a specific item in a loan. There can be duplicate Loan_IDs in GCL_Loan_Items.
Can anyone explain why the following query would be faster or slower than the one above?
SELECT * FROM GCL_Loans WHERE Loan_ID IN
(
  SELECT DISTINCT Loan_ID FROM GCL_Loan_Items
)
The "DISTINCT" version is probably faster, because the IN clause will have a smaller data set to search to determine if any given GCL_Loans.Loan_ID is in the set. Without the DISTINCT, the data set will be larger.
There's a reasonably good argument to be made that the query optimizer will automatically recognize that the IN test is a set-wise, not a list-wise, test and perform the DISTINCT on its own ... but I've seen that fail before.
Note that subselects can be a problem here too, because some databases (MySQL) will execute the subselect once for each element in the primary select.
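A rewrite worth benchmarking alongside both versions (my suggestion, not from the answers above) is a semi-join via EXISTS, which expresses the set-membership test directly and lets the optimizer stop at the first matching item per loan:

SELECT *
FROM GCL_Loans l
WHERE EXISTS (
  SELECT 1
  FROM GCL_Loan_Items i
  WHERE i.Loan_ID = l.Loan_ID
);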
The plan and performance of both are equal.
By selecting DISTINCT there are fewer items in the subquery (IN) to check against. My understanding is that SQL will run the subquery first to generate the list of items that are to be included in the IN.

What is the order of execution for this SQL statement

I have below SQL Query :
SELECT TOP 5 C.CustomerID, C.CustomerName, C.CustomerSalary
FROM Customer C
WHERE C.CustomerSalary > 10000
ORDER BY C.CustomerSalary DESC
What will be the execution order of the following, with a proper explanation?
TOP Clause
WHERE Clause
ORDER BY Clause
Check out the documentation for the SELECT statement, in particular this section:
Logical Processing Order of the SELECT statement
The following steps show the logical processing order, or binding
order, for a SELECT statement. This order determines when the objects
defined in one step are made available to the clauses in subsequent
steps. For example, if the query processor can bind to (access) the
tables or views defined in the FROM clause, these objects and their
columns are made available to all subsequent steps. Conversely,
because the SELECT clause is step 8, any column aliases or derived
columns defined in that clause cannot be referenced by preceding
clauses. However, they can be referenced by subsequent clauses such as
the ORDER BY clause. Note that the actual physical execution of the
statement is determined by the query processor and the order may vary
from this list.
which gives the following order:
FROM
ON
JOIN
WHERE
GROUP BY
WITH CUBE or WITH ROLLUP
HAVING
SELECT
DISTINCT
ORDER BY
TOP
For the query in question, that means the relevant clauses are processed in this order:
WHERE
ORDER BY
TOP
Here is a good article about that: http://blog.sqlauthority.com/2009/04/06/sql-server-logical-query-processing-phases-order-of-statement-execution/
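One practical consequence of this binding order (my illustration, not from the linked article): a column alias defined in the SELECT clause can be referenced by ORDER BY but not by WHERE, because WHERE binds earlier:

SELECT TOP 5 C.CustomerID,
       C.CustomerSalary * 12 AS AnnualSalary
FROM Customer C
-- WHERE AnnualSalary > 120000        -- error: WHERE (step 4) binds before SELECT (step 8)
WHERE C.CustomerSalary * 12 > 120000  -- the expression must be repeated here
ORDER BY AnnualSalary DESC;           -- OK: ORDER BY (step 10) binds after SELECT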
Simply remember this phrase:
Fred Jones' Weird Grave Has Several Dull Owls
Take the first letter of each word, and you get this:
FROM
(ON)
JOIN
WHERE
GROUP BY
(WITH CUBE or WITH ROLLUP)
HAVING
SELECT
DISTINCT
ORDER BY
TOP
Hope that helps.
This is the exact execution order in your case:
1-FROM
2-WHERE
3-SELECT
4-ORDER BY
5-TOP
TOP, WHERE, and ORDER BY are not "executed" - they simply describe the desired result and the database query optimizer determines (hopefully) the best plan for the actual execution. The separation between "declaring the desired result" and how it is physically achieved is what makes SQL a "declarative" language.
Assuming there is an index on CustomerSalary, and the table is not clustered, your query will likely be executed as an index seek + table heap access, as illustrated in this SQL Fiddle (click on View Execution Plan at the bottom).
As you can see, first the correct CustomerSalary value is found through the Index Seek, then the row that value belongs to is retrieved from the table heap through RID Lookup (Row ID Lookup). The Top is just for show here (and has 0% cost), as is the Nested Loops for that matter - the starting index seek will return (at most) one row in any case. The whole query is rather efficient and will likely cost only a few I/O operations.
If the table is clustered, you'll likely have another index seek instead of the table heap access, as illustrated in this SQL Fiddle (note the lack of the NONCLUSTERED keyword in the DDL SQL).
But beware: I was lucky this time to get the "right" execution plan. The query optimizer might have chosen a full table scan, which is sometimes actually faster on small tables. When analyzing query plans, always try to do that on realistic amounts of data!
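Since the fiddle plans are not reproduced here, this is roughly the schema being described (column names from the question; the exact DDL and index name are my assumptions):

CREATE TABLE Customer (
    CustomerID INT PRIMARY KEY NONCLUSTERED,  -- NONCLUSTERED keeps the table a heap
    CustomerName VARCHAR(100),
    CustomerSalary MONEY
);

CREATE NONCLUSTERED INDEX IX_Customer_Salary
    ON Customer (CustomerSalary);

-- Dropping NONCLUSTERED from the primary key clusters the table, which is
-- what replaces the RID Lookup with a second index (key) lookup.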
Visit https://msdn.microsoft.com/en-us/library/ms189499.aspx for a better explanation.
My $0.02 here.
There are two different concepts in action here: the logical execution order and the query execution plan. Another way to see it is to ask who answers the following questions:
How did MSSQL understand my SQL query?
What will it do to execute it in the best possible way, given the current schema and data?
The first question is answered by the logical execution order. Brian's answer shows what it is. It's the way SQL understood your command: "FROM the Customer table (aliased as C) consider only the rows WHERE C.CustomerSalary > 10000, ORDER them BY C.CustomerSalary in descending order, and SELECT the columns listed for the TOP 5 rows". The result set will obey that meaning.
The second question's answer is the query execution plan - and it depends on your schema (table definitions, selectivity of data, number of rows in the Customer table, defined indexes, etc.), since it is heavily dependent on the SQL Server optimizer's internal workings.
Here is the complete sequence for SQL Server:
1. FROM
2. ON
3. JOIN
4. WHERE
5. GROUP BY
6. WITH CUBE or WITH ROLLUP
7. HAVING
8. SELECT
9. DISTINCT
10. ORDER BY
11. TOP
So from the above list, you can easily understand that the execution sequence of TOP, WHERE, and ORDER BY is:
1. WHERE
2. ORDER BY
3. TOP
Get more information about it from Microsoft

In SQL, what’s the difference between count(*) and count('x')? [duplicate]

This question already has answers here:
In SQL, what's the difference between count(column) and count(*)?
(12 answers)
Closed 9 years ago.
I have the following code:
SELECT <column>, COUNT(*)
FROM <table>
GROUP BY <column>
HAVING COUNT(*) > 1;
Is there any difference to the results or performance if I replace the COUNT(*) with COUNT('x')?
(This question is related to a previous one)
To say that SELECT COUNT(*) vs COUNT(1) results in your DBMS returning "columns" is pure bunk. That may have been the case long, long ago, but any self-respecting query optimizer will choose some fast method to count the rows in the table - there is NO performance difference between SELECT COUNT(*), COUNT(1), and COUNT('this is a silly conversation').
Moreover, COUNT(1) vs COUNT(*) will NOT make any difference in INDEX usage -- most DBMSs will actually optimize COUNT(n) into COUNT(*) anyway. See Ask Tom: Oracle has been optimizing COUNT(n) into COUNT(*) for the better part of a decade, if not longer:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1156151916789
problem is in count(col) to count(*) conversion
*** 03/23/00 05:46 pm *** one workaround is to set event 10122 to turn off the count(col) -> count(*) optimization. Another workaround is to change the count(col) to count(*); it means the same when the col has a NOT NULL constraint. The bug number is 1215372.
One thing to note - if you are using COUNT(col) (don't!) and col is nullable, then the DBMS actually has to count the number of non-NULL occurrences in the table (either via index scan, histogram, etc. if they exist, or a full table scan otherwise).
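To make that concrete, a minimal sketch (hypothetical table t; the multi-row INSERT syntax varies slightly by DBMS):

CREATE TABLE t (col INT);
INSERT INTO t VALUES (1), (NULL), (2);

SELECT COUNT(*)   FROM t;  -- 3: counts rows
SELECT COUNT(col) FROM t;  -- 2: NULLs in col are not counted
SELECT COUNT('x') FROM t;  -- 3: the constant is never NULL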
Bottom line: if what you want is the count of rows in a table, use COUNT(*)
The major performance difference is that COUNT(*) can be satisfied by examining the primary key on the table.
i.e. in the simple case below, the query will return immediately, without needing to examine any rows.
select count(*) from table
I'm not sure if the query optimizer in SQL Server will do so, but in the example above, if the column you are grouping on has an index the server should be able to satisfy the query without hitting the actual table at all.
To clarify: this answer refers specifically to SQL Server. I don't know how other DBMS products handle this.
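For instance, an index along these lines (my sketch; substitute the real table and column names for the placeholders) would let SQL Server answer the grouped query from the index alone:

CREATE INDEX IX_column ON <table> (<column>);

-- The GROUP BY <column> HAVING COUNT(*) > 1 query can then be satisfied by
-- scanning just this index, without touching the base table.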
This question is slightly different from the one referenced. In the referenced question, it was asked what the difference is between count(*) and count(SomeColumnName), and SQLMenace's answer was spot on.
To address this question: essentially, there is no difference in the result. Both COUNT(*) and COUNT('x'), and for that matter COUNT(1), will return the same number. The difference is that when using "*", just as in a SELECT, all columns are returned and then counted, whereas when a constant is used (e.g. 'x' or 1) a row with one column is returned and then counted. The performance difference would be seen when "*" returns many columns.
Update: The above statement about performance is probably not quite right, as discussed in the other answers, but it does apply to subselect queries when using EXISTS and NOT EXISTS.
MySQL: According to the MySQL website, COUNT(*) is faster for single table queries when using MyISAM:
http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html#function_count
I'm guessing that a HAVING clause with a COUNT in it may change things.