If a window is provided multiple times in the same query, how is it evaluated? Does the query parser check whether one window is the same as another, or easily 'derived' from another? For example, in the following:
SELECT
MAX(val) OVER (PARTITION BY product_id ORDER BY date ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) one,
MAX(val) OVER (PARTITION BY product_id ORDER BY date ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) two,
MAX(val) OVER (PARTITION BY product_id ORDER BY date ROWS BETWEEN 4 PRECEDING AND CURRENT ROW) three
FROM
table
How do database engines 'optimize' this query, if they do at all? Does it involve calculating a single window and altering that for other calculations, or does this create three distinct windows? Where might I be able to find more information on how/when the window functions are evaluated (any backend is fine -- oracle, mysql, sqlserver, postgres)?
This depends on the database. That said, the partition by and order by incur overhead for processing data. There is a good chance that the database will not need to re-do that work just because the window frame specification ("rows between") differs slightly.
Of course, different partition by and order by conditions would mean that the data could not be re-used and would need to be reprocessed.
So, given the specifications you have, with only slight differences, there is an opportunity for a good optimizer to re-use intermediate results. However, it is easy to modify the clauses so they cannot be re-used.
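For example, in PostgreSQL you can check what the planner actually does with EXPLAIN (a sketch; the table name t stands in for the question's table). When the PARTITION BY and ORDER BY match, the plan typically shows a single Sort node feeding the WindowAgg steps, i.e. the ordering work is done once even though each frame is evaluated separately.
EXPLAIN
SELECT
    MAX(val) OVER (PARTITION BY product_id ORDER BY date
                   ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS one,
    MAX(val) OVER (PARTITION BY product_id ORDER BY date
                   ROWS BETWEEN 3 PRECEDING AND CURRENT ROW) AS two
FROM t;
-- Look for one Sort shared by the WindowAgg nodes: that is the
-- partition/order work being reused across the differing frames.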
I'm trying to understand how window functions work internally. Suppose I have this data:
ID,Amt
A,1
B,2
C,3
D,4
E,5
If I run this, it gives the sum of all amounts in the total column against every record:
Select ID, SUM (AMT) OVER () total from table
but when I run this, it gives me a cumulative sum:
Select ID, SUM (AMT) OVER (order by ID) total from table
I'm trying to understand what is happening with OVER () versus OVER (ORDER BY ID).
What I've understood is that when no partition is defined in OVER, everything is treated as a single partition. But I can't understand why adding ORDER BY ID inside OVER () makes it start doing a cumulative sum.
Can anyone share what's happening behind the scenes here?
That is an interesting case. Based on the documentation, here is the explanation and an example.
If PARTITION BY is not specified, the function treats all rows of the query result set as a single partition. The function will be applied to all rows in the partition if you don't specify an ORDER BY clause.
So if you specify ORDER BY, then:
If it is specified, and a ROWS/RANGE is not specified, then RANGE UNBOUNDED PRECEDING AND CURRENT ROW is used as the default window frame by the functions that can accept an optional ROWS/RANGE specification (for example MIN or MAX).
So technically these two commands are the same:
SELECT ID, SUM(AMT) OVER (ORDER BY ID) total FROM table
SELECT ID, SUM(AMT) OVER (ORDER BY ID RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) total FROM table
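Against the five sample rows, both of these produce the running total, while the plain OVER () form returns the grand total 15 for every row:
ID,total
A,1
B,3
C,6
D,10
E,15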
You can read more in the documentation: https://learn.microsoft.com/en-us/sql/t-sql/queries/select-over-clause-transact-sql?view=sql-server-ver15
This is not related to Oracle itself, but it's part of the SQL Standard and behaves the same way in many databases including Oracle, DB2, PostgreSQL, SQL Server, MySQL, MariaDB, H2, etc.
By definition, when you include the ORDER BY clause the engine will produce "running values" (cumulative aggregation) inside each partition; without the ORDER BY clause it produces the same, single value that aggregates the whole partition.
Now, the partition itself is mainly defined by the PARTITION BY clause. In its absence, the whole result set is considered as a single partition.
Finally, as a more advanced topic, the partition can be further tweaked using a "frame" clause (ROWS and RANGE) and a "frame exclusion" clause (EXCLUDE).
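For example, against the sample table above, a sliding frame sums only the current row and its immediate neighbours rather than everything up to the current row (a sketch; names are taken from the question):
SELECT ID,
       SUM(AMT) OVER (ORDER BY ID
                      ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) AS neighbour_sum
FROM table
-- e.g. row C gets 2 + 3 + 4 = 9 instead of the running total 6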
Hi everyone. First time I've posted here. I looked for some sticky threads that might tell me some "HEY DO THIS BEFORE YOU POST FOR THE FIRST TIME" info, but I may have missed it. So, here's the question:
I'm working on building out a dataset for an analysis, and I'm trying to fill in some null rows. I don't know if it's the best way, but I think I need to LAG OVER PARTITION BY this dataset. Here's an example of the table:
My goal is to have all of the null values in the BidEnd field filled with the most recent non-null value above them. So rows 1-4 would all be filled with 2020-01-03. The end goal is to be able to label all the rows as valid or not: if the bid start occurred after the bid end, the row is not valid. The query will need to do this for all customers and then for all bid_ids grouped under each customer.
I'd much prefer to use the code and an actual example, but I am not allowed to share that information, so I've tried to recreate the scenario as best as possible. Sorry if it's confusing.
In standard SQL, you would use lag(ignore nulls):
select t.*,
lag(bidend ignore nulls) over (partition by customer2 order by row)
from t;
Although standard SQL, not all databases support the ignore nulls option on lag(). That is why tagging your database is important.
Actually, it looks like you have one value per customer2/bid_id pair. If that is true, you can use max():
select t.*,
max(bidend) over (partition by customer2, bid_id)
from t;
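If you also want the validity flag described in the question, either expression can be wrapped in a CASE. A sketch, assuming a bidstart column and a bid_check label that match the description:
select t.*,
       -- bidstart and bid_check are assumed names; a bid is valid when it
       -- started on or before the (filled) bid end for its customer2/bid_id
       case when bidstart <= max(bidend) over (partition by customer2, bid_id)
            then 'valid'
            else 'not valid'
       end as bid_check
from t;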
Through R I connect to a remotely held database. The issue I have is my hardware isn't so great and the dataset contains tens of millions of rows with about 10 columns per table. When I run the below code, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the tables into two parts and filtering/analysing/combining as needed going forward. I think that because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I start with the code below:
df <- querySql(conn,"SELECT TOP 5000000 FROM Table1")
Where I get stuck is how to create a second dataframe containing the remaining n - 5000000 rows from Table1, ending at the final row.
I'm open to suggestions, but I think there are two potential answers to this question. The first is to make it work within the querySql call. The second is to use an R function other than querySql (no idea what this would look like). I'm limited to R due to my work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs impose an order on their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records to be returned by a query (let's call these the "returned records") below the number of records that could be returned by that query (let's call these the "selected records") and if you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the partition. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>0 and rn<=5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>5000000 and rn<=10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>10000000 and rn<=15000000;
...
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
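For reference, the SQL Server form of the first solution would look roughly like this (a sketch; id1, id2 are placeholders for whatever columns give a full, deterministic order):
-- second page of 5,000,000 rows; an ORDER BY is mandatory with OFFSET/FETCH
SELECT *
FROM Table1
ORDER BY id1, id2
OFFSET 5000000 ROWS
FETCH NEXT 5000000 ROWS ONLY;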
I have a database with a column called "level", which stores integers that increment from 1 upwards.
I'd like to run a select statement (which will also have various other conditions) to retrieve those rows that are the first, and last, of each "level", i.e. the boundaries of each level. But I have tens of millions of records, so would like to do this in the most efficient way possible.
Any suggestions?
I'll call the column that determines first and last "something". I suppose it is a timestamp, but you didn't tell us.
If you need one column from the row, then
SELECT level, MAX(something) as maxie, MIN(something) as minnie
FROM mytable
GROUP BY level;
If you want the whole row, make sure to use a database with window functions:
SELECT DISTINCT first_value(mytable) over www, last_value(mytable) over www
FROM mytable
WINDOW www as (partition by level order by level, something
RANGE BETWEEN unbounded preceding AND unbounded following);
If these are too slow, there might be some gimmicks based on clever indexing of level and something. I'm still learning windowing, which is new to Postgres 9 but has been in Oracle for years. (MySQL only added window functions in version 8.0; on older versions you probably need to get the PKs of the extrema and do a join.)
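The join-based fallback mentioned above might look something like this (a sketch, assuming the combination of level and something identifies the extreme rows):
SELECT t.*
FROM mytable t
JOIN (SELECT level,
             MIN(something) AS minnie,
             MAX(something) AS maxie
      FROM mytable
      GROUP BY level) b
  ON t.level = b.level
 -- keep only the first and last row of each level
 AND (t.something = b.minnie OR t.something = b.maxie);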
I want to get the n-th to m-th records in a table. What's the best choice between the 2 solutions below:
Solution 1:
SELECT * FROM Table WHERE ID >= n AND ID <= m
Solution 2:
SELECT * FROM
(SELECT *,
ROW_NUMBER() OVER (ORDER BY ID) AS row
FROM Table
)a
WHERE row >= n AND row <= m
As others have already pointed out, the queries return different results, so this is comparing apples to oranges.
But the underlying question remains: which is faster, keyset driven paging or rownumber driven paging?
Keyset Paging
Keyset driven paging relies on remembering the top and bottom keys of the last displayed page, and requesting the next or previous set of rows, based on the top/last keyset:
Next page:
select top (<pagesize>) ...
from <table>
where key > @last_key_on_current_page
order by key;
Previous page:
select top (<pagesize>) ...
from <table>
where key < @first_key_on_current_page
order by key desc;
This approach has two main advantages over the ROW_NUMBER approach, or over the equivalent LIMIT approach of MySQL:
It is correct: unlike the row-number based approach, it correctly handles new entries and deleted entries. The last row of page 4 does not show up as the first row of page 5 just because row 23 on page 2 was deleted in the meantime, nor do rows mysteriously vanish between pages. These anomalies are common with the row_number based approach, but the keyset based solution does a much better job at avoiding them.
It is fast: all operations can be solved with a fast row positioning followed by a range scan in the desired direction.
However, this approach is difficult to implement, hard to understand by the average programmer and not supported by the tools.
Row Number Driven
This is the common approach introduced with Linq queries:
select ...
from (
    select ..., row_number() over (...) as rn
    from table) t
where rn between @firstRow and @lastRow;
(or a similar query using TOP)
This approach is easy to implement and is supported by tools (specifically by the Linq .Skip and .Take operators). But this approach is guaranteed to scan the index in order to count the rows. It usually works very fast for page 1 and gradually slows down as one goes to higher and higher page numbers.
As a bonus, with this solution it is very easy to change the sort order (simply change the OVER clause).
Overall, given the ease of the ROW_NUMBER() based solutions, the support they have from Linq, and the simplicity of using arbitrary orders, the ROW_NUMBER() based solutions are adequate for moderate data sets. For large and very large data sets, though, ROW_NUMBER() can cause serious performance issues.
One other thing to consider is that oftentimes there is a definite pattern of access. Often the first few pages are hot and pages after 10 are basically never viewed (e.g. most recent posts). In this case, the penalty ROW_NUMBER() incurs for visiting bottom pages (pages for which a large number of rows have to be counted to get to the starting result row) may well be ignored.
And finally, keyset pagination is great for dictionary navigation, which ROW_NUMBER() cannot accommodate easily. Dictionary navigation is where, instead of using page numbers, users can navigate to certain anchors, like alphabet letters. A typical example is a Rolodex-like contact sidebar: you click on M and you navigate to the first customer name that starts with M.
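A dictionary-navigation query is essentially a keyset query anchored at the chosen letter. A sketch (the customers table and name column are illustrative):
-- jump straight to the names starting at 'M'; customers/name are assumed names
select top (20) name
from customers
where name >= 'M'
order by name;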
The 2nd solution is your best choice. It takes into account the fact that you could have holes in your ID column. I'd rewrite it as a CTE instead of a subquery, though...
;WITH MyCTE AS
(SELECT *,
ROW_NUMBER() OVER (ORDER BY ID) AS row
FROM Table)
SELECT *
FROM MyCTE
WHERE row >= @start
AND row <= @end
They are different queries.
Assuming ID is a surrogate key, it may have gaps. ROW_NUMBER will be contiguous.
If you can guarantee you have no gaps in the data, then go with the 1st one, because I'd hope ID is indexed. The 2nd one is more "correct", though.