Is there a performance difference in using a GROUP BY with MAX() as the aggregate vs ROW_NUMBER over partition by?

Is there a performance difference between the following two queries, and if so, which one is better?

select
    q.id,
    q.name
from (
    select id, name, row_number() over (partition by name order by id desc) as row_num
    from table
) q
where q.row_num = 1

versus

select
    max(id),
    name
from table
group by name

(The result set should be the same.)
This is assuming that no indexes are set.
UPDATE: I tested this, and the group by was faster.

I had a table of about 4.5M rows, and I wrote both a MAX with GROUP BY solution and a ROW_NUMBER solution and tested them both. The MAX required two clustered index scans of the table, one to aggregate and a second to join back to the rest of the columns, whereas ROW_NUMBER needed only one. (Obviously one or both of these could be indexed to minimize IO, but the point is that the GROUP BY approach requires two scans.)
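For concreteness, a sketch of the two shapes described above; my_table and other_cols are stand-ins for the real table and its remaining columns:

-- MAX with GROUP BY: aggregate once, then join back for the remaining columns
select t.id, t.name, t.other_cols
from my_table t
join (
    select name, max(id) as id
    from my_table
    group by name
) m on m.name = t.name and m.id = t.id;

-- ROW_NUMBER: a single pass over the table
select id, name, other_cols
from (
    select id, name, other_cols,
           row_number() over (partition by name order by id desc) as row_num
    from my_table
) q
where q.row_num = 1;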
According to the optimizer, in my case the ROW_NUMBER version is about 60% more efficient by subtree cost, and the runtime statistics showed about 20% less CPU time. However, in real elapsed time, the ROW_NUMBER solution takes about 80% longer, so the GROUP BY wins in my case.
This seems to match the other answers here.

The GROUP BY should be faster. ROW_NUMBER has to assign a row number to every row in the table, and it does this before filtering out the rows it doesn't want.
The second query is, by far, the better construct. In the first, you have to be sure that the columns in the PARTITION BY clause match the columns you want. More importantly, GROUP BY is a well-understood construct in SQL. I would also speculate that GROUP BY makes better use of indexes, but that is only speculation.

I'd use the GROUP BY name version.
There's not much in it when the index is on (name, id DESC) (Plan 1), but if the index is declared as (name, id ASC) (Plan 2), then in SQL Server 2008 the ROW_NUMBER version is unable to use this index and gets a sort operation, whereas the GROUP BY can use a backwards index scan to avoid it.
You'd need to check the plans on your version of SQL Server and with your data and indexes to be sure.
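For reference, the two index declarations being compared would look something like this (table and index names are assumed):

-- Plan 1: matches the ROW_NUMBER version's required sort order directly
create index ix_name_id_desc on my_table (name, id desc);

-- Plan 2: ascending; GROUP BY can still use a backwards scan, ROW_NUMBER gets a sort
create index ix_name_id_asc on my_table (name, id asc);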

Related

If you do a simple SELECT-WHERE on a CTE that is already sorted, are your results guaranteed to still be in that same order, just filtered?

Wondering about expected/deterministic ordering output from Oracle 11g for queries based on sorted CTEs.
Consider this (extremely oversimplified) example SQL query. Again, note how the CTE has an ORDER BY clause in it.
WITH SortedArticles AS (
    SELECT *
    FROM Articles
    ORDER BY DatePublished
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe';
Can it be assumed that the returned rows are guaranteed to be in the same order as in the CTE, or do I have to sort them again?
Again, this is an extremely oversimplified example, but it contains the important parts of what I'm asking:
The CTE is sorted
The final SELECT statement selects only against the CTE, nothing else (no joins, etc.), and
The final SELECT statement only specifies a WHERE clause. It is purely a filtering statement.
The short answer is no. The only way to guarantee ordering is with an ORDER BY clause on the outer query. In that situation, there is no need to sort the results inside the CTE at all.
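In other words, for the example above you can drop the ORDER BY from the CTE and put it on the outer query, where it is guaranteed to apply:

WITH SortedArticles AS (
    SELECT *
    FROM Articles
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe'
ORDER BY DatePublished;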
However, if the sort expression is complex, and you need sorting in the derived CTEs (e.g. because of using OFFSET/FETCH or ROWNUM), you could simplify the subsequent sorting by adding a row number field to the original CTE based on its sort criteria and then just sorting the derived CTEs by that row number. For your example:
WITH SortedArticles AS (
    SELECT a.*,
           ROW_NUMBER() OVER (ORDER BY DatePublished) AS rn
    FROM Articles a
)
SELECT *
FROM SortedArticles
WHERE Author = 'Joe'
ORDER BY rn;
No, the results are not guaranteed to be in the same order as in the subquery. They never were, and they never will be. You may observe a certain behaviour, especially if the CTE is materialized, which you can try to influence with optimizer hints like /*+ MATERIALIZE */ and /*+ INLINE */. However, the behaviour of the query optimizer also depends on data volume, IO vs CPU speed, and most importantly on the database version. For instance, Oracle 12.2 introduced a feature called "In-Memory Cursor Duration Temp Table" that tries to speed up queries like yours without preserving the order of the subquery.
I'd go along with @Nick's suggestion of adding a row number field in the subquery.

Splitting large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is that my hardware isn't great and the dataset contains tens of millions of rows, with about 10 columns per table. When I run the code below, at the df step I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the table into two parts, then filtering/analysing/combining as needed going forward. Because I use the conn JDBC connection, I think I have to use SQL syntax to make it work. With SQL, I start with the code below:
df <- querySql(conn,"SELECT TOP 5000000 * FROM Table1")
And then where I get stuck is how to create a second dataframe containing the remaining rows, from row 5,000,001 to the final row of Table1.
I'm open to suggestions, but I think there are two potential answers to this question. The first is to work within querySql to get it working. The second is to use an R function other than querySql (I have no idea what this would look like). I'm limited to R due to my work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs impose an order on their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records returned by a query (let's call these the "returned records") below the number of records that could be returned (the "selected records"), and you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the numbering. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 0 and rn <= 5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 5000000 and rn <= 10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 10000000 and rn <= 15000000;
...
Obviously, this solution is more complicated and verbose than the previous one, and the previous one might allow for performance optimizations not possible under the more manual approach of numbering and filtering. Hence I would recommend the first solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
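For instance, the first solution written in SQL Server's syntax would look something like this (the ordering columns are assumed, as before):

select * from Table1 order by id1, id2 offset 0 rows fetch next 5000000 rows only;
select * from Table1 order by id1, id2 offset 5000000 rows fetch next 5000000 rows only;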

hsqldb: do I have to ORDER BY to ensure consistent selection order?

I have a table containing samples. The inserted samples are already naturally ordered by the timestamp.
My question is this - when I SELECT from the table do I have to use the ORDER BY clause to ensure the fetched samples are ordered by the timestamp?
Rows in a relational database are NOT sorted. (Picture them as balls in a basket. Which one is the "first"?)
The only way (really, the only one) to get a consistently sorted result is to use ORDER BY.
You cannot rely on side effects of joins, GROUP BY, UNION, index retrieval or similar operators. They will never guarantee an order. The DBMS is free to return the rows in whatever order it thinks is fastest unless you specify an ORDER BY.
If an HSQLDB table T has a column C as its primary key, or has any index on that column,
SELECT * FROM T ORDER BY C
will return ordered rows without extra ORDER BY processing.
If there is a condition on the select which uses an index on a different column, you can still force the use of the index for the ORDER BY:
SELECT * FROM T WHERE <some condition> ORDER BY C USING INDEX
But in this case, you should only use USING INDEX if most of the rows of the table will be returned. Otherwise it is better to let the engine use the other index to reduce the table scan time.
USING INDEX is ignored if there is no index to use for ORDER BY.
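A minimal sketch of the indexed case described above, with assumed table and column names:

-- the primary key on sample_ts gives HSQLDB an index it can traverse in order
CREATE TABLE samples (sample_ts TIMESTAMP PRIMARY KEY, reading DOUBLE);

-- returns rows ordered by timestamp without an extra sort step
SELECT * FROM samples ORDER BY sample_ts;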

creating functional index with a order by clause in oracle

I need to create an "order by" index on a table:
Student (
roll_No,
name,
stream,
percentage,
class_rank,
overall_rank )
I wish to query something like
SELECT *
FROM student
WHERE stream = 'science'
The expected result is the students arranged in descending order of their class rank. A requirement is that I cannot specify an ORDER BY clause in the query itself; this should be achieved by an index on (stream, class_rank DESC). Is this achievable in Oracle?
If you do not specify an ORDER BY clause, Oracle does not guarantee the order in which rows are returned, so the requirement does not make sense.
You might get lucky and find that Oracle chooses a query plan that happens to return the rows in the order you want. But that would be a matter of luck: Oracle could choose a different query plan tomorrow, or an Oracle version upgrade may cause the results to change. For example, folks who relied for years on the GROUP BY clause ordering results as a side effect were distressed when a new version of Oracle added a more efficient grouping algorithm that didn't have that side effect.
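If the no-ORDER-BY restriction can be relaxed, the safe pattern is to create the index and still spell out the sort. With a matching index, Oracle can typically satisfy the ORDER BY without a separate sort step (the index name here is illustrative):

CREATE INDEX stream_rank_idx ON Student (stream, class_rank DESC);

SELECT *
FROM Student
WHERE stream = 'science'
ORDER BY class_rank DESC;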
I got it working:
CREATE INDEX stream_rank_idx ON Student (stream, class_rank DESC);
So whenever I fire a SELECT * FROM Student WHERE stream = ? query, the above index will be used and it will return the desired result.
I think it is almost always safe if I am not upgrading Oracle, and even after an upgrade there is a very low probability that Oracle will change the way indexes are picked.

The faster of two SQL queries, sort and select top 1, or select MAX

Which of the following two queries is faster?
1
SELECT TOP 1 order_date
FROM orders WITH (NOLOCK)
WHERE customer_id = 9999999
ORDER BY order_date DESC
2
SELECT MAX(order_date)
FROM orders WITH (NOLOCK)
WHERE customer_id = 9999999
With an index on order_date, the two perform the same.
Without an index, MAX is a little faster, since it will use a Stream Aggregate rather than a Top N Sort.
ORDER BY is almost always the slowest approach, because the table's data must be sorted.
Aggregate functions shouldn't slow things down as much as a sort. However, some aggregate functions use a sort as part of their implementation, so a particular set of tables in a particular database product must be tested experimentally to see which is faster for that set of data.
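In SQL Server, a quick way to run that experiment is to execute both candidates with runtime statistics switched on and compare the reported reads and CPU time:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- candidate 1: sort, then take the top row
SELECT TOP 1 order_date
FROM orders
WHERE customer_id = 9999999
ORDER BY order_date DESC;

-- candidate 2: aggregate
SELECT MAX(order_date)
FROM orders
WHERE customer_id = 9999999;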
I would say the first, because the second requires it to go through an aggregate function.
However, as marc_s said, test it.
You can't answer that without knowing at least something about your indexes (do you have one on customer_id, or on order_date?), the amount of data in the table, the state of your statistics, etc.
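If this query matters enough to tune, a covering index along these lines (the name is illustrative) would let either form be answered with a simple seek:

CREATE INDEX ix_orders_customer_date ON orders (customer_id, order_date DESC);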
It depends on how big your table is. With the second query, there's no need for TOP 1, since MAX only returns one value. If I were you, I would use the second query.