Oracle slow RANK function - sql

My application uses views that must be kept generic (no filters), and which include analytic functions RANK and DENSE_RANK. For example I have a view MYVIEW:
SELECT
RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
FIELD2,
FIELD3
FROM TABLE1;
My application then applies the necessary filters at runtime, e.g.
SELECT * FROM MYVIEW WHERE FIELD3 IN ('a','b','c');
My query is very fast without the RANK function, but painfully slow (2+ minutes) with it (I get the right results, just slow). The underlying table has 250,000+ rows and I have no control over its design. I cannot partition it any further. So is it slow because it creates partitions for every unique entry in FIELD1 every time the view is called? Any other way to avoid that? Any suggestions on how to make this faster?

As was mentioned in the comments, with your analytic function in the view, Oracle can't take any shortcuts (predicate pushing) because
in your view, you have created an agreement with Oracle: whenever the view is accessed the RANK should be based on all of the rows in the table - no WHERE clause was specified
when querying a view, an "external" WHERE clause should never affect how a row generated by the view looks, but only whether or not that row is kept
analytic functions look at other rows to generate a value so if you change those rows (filtering) you can change the value - pushing a predicate could easily affect the values generated by these functions
if this could happen, your view result could become very inconsistent (just depending on how the optimizer chose to evaluate the query)
So, based on the details you've provided, your query needs to be evaluated like this:
SELECT *
FROM (
SELECT
RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
FIELD2,
FIELD3
FROM TABLE1
) myview
WHERE <condition>; -- rankings are not affected by external conditions
and not this:
SELECT * FROM (
SELECT
RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
FIELD2,
FIELD3
FROM TABLE1
WHERE FIELD3 IN ('a','b','c') -- ranking is affected by the conditions
)
So, is there a way to make this faster? Maybe.
If the table is partitioned, there's the thought of using parallel query.
Could an index help?
Not in the usual sense. Since there are no conditions in the view itself, it will do a full table scan to consider all of the rows for the rankings, and by the time the WHERE clause is applied, it's too late to use an index for filtering.
However, if you had an index that "covered" the query, that is, an index on just the columns being used (e.g. FIELD1, FIELD2, FIELD3 in that order), that index could be used as a smaller version of the table (instead of FULL TABLE SCAN the plan would show INDEX FAST FULL SCAN). As a bonus, since it's already sorted, it could be efficient at working out the partitions on FIELD1 and then ordering on FIELD2 within each partition.
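A rough sketch of such a covering index (the index name is just a placeholder; note that for an INDEX FAST FULL SCAN to be possible, at least one of the indexed columns must be NOT NULL so every row of the table is represented in the index):
CREATE INDEX TABLE1_RANK_COV_IX ON TABLE1 (FIELD1, FIELD2, FIELD3);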
Another option would be to make this a materialized view instead, but if your data is changing often, it could be a pain to keep current.
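A minimal sketch, with MYVIEW_MV as a placeholder name and assuming a complete refresh is acceptable (fast refresh is generally not available when the query contains analytic functions):
CREATE MATERIALIZED VIEW MYVIEW_MV
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT
RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
FIELD2,
FIELD3
FROM TABLE1;
Queries would then read from MYVIEW_MV (or rely on query rewrite), and you would refresh it on whatever schedule your data allows.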
One final thought would be something that is similar to the "poor man's" partitioning used before the days of the partitioning option.
(Sorry I can't find a good link that describes this, but maybe you have heard of it before.)
This is really only an option if:
your partitioning column has a relatively small number of distinct values
those values don't change
you know which partition values you can use to isolate the data in your query
Oracle is willing to push the predicate when it's safe to do so
Given that Oracle seems averse to pushing the predicate when analytic functions are involved, I'm not giving this a high probability of success.
If you want more info on that, let me know.

Related

Conditional offset on union select

Consider that we have a complex union select over dozens of tables with different structure but similar field meanings:
SELECT a1.abc as field1,
a1.bcd as field2,
a1.date as order_date
FROM a1_table a1
UNION ALL
SELECT a2.def as field1,
a2.fff as field2,
a2.ts as order_date
FROM a2_table a2
UNION ALL ...
ORDER BY order_date
Notice also that the results as a whole are sorted by the "synthetic" field order_date.
This query returns a huge number of rows, and we want to work with pages from this set of rows. Each page is defined by two parameters:
page size
field2 value of last item from previous page
The most important thing is that we cannot change the way a page is defined, i.e. it is not possible to use the row number or the date of the last item from the previous page: only the field2 value is acceptable.
The current paging algorithm is implemented in a rather ugly way:
1) the query above is wrapped in an additional select that adds a row_number() column, and then wrapped in a stored procedure union_wrapper which returns an appropriate
table ( field1 ..., field2 character varying),
2) then a complex select is performed:
RETURN QUERY
with tmp as (
select
rownum, field1, field2 from union_wrapper()
)
SELECT field1, field2
FROM tmp
WHERE rownum > (SELECT rownum
FROM tmp
WHERE field2 = last_field_id
LIMIT 1)
LIMIT page_size
The problem is that we have to build the full union-select result in memory in order to later find the row number from which to cut the new page. This is quite slow and takes an unacceptably long time.
Is there any way to restructure these operations in order to significantly reduce query complexity and increase its speed?
And again: we cannot change the paging conditions and we cannot change the structure of the tables - only the way the rows are retrieved.
UPD: I also cannot use temp tables, because I'm working on a read replica of the database.
You have successfully maneuvered yourself into a tight spot. The query and its ORDER BY expression contradict your paging requirements.
ORDER BY order_date is not a deterministic sort order (there could be multiple rows with the same order_date) - which you need before you do anything else here. And field2 does not seem to be unique either. You need both: a deterministic sort order and a unique indicator for page end / start. Ideally, the indicator matches the sort order. That could be (order_date, field2), with both columns defined NOT NULL and the combination UNIQUE. Your restriction "only field2 value is acceptable" contradicts your query.
That's all before thinking about how to get best performance ...
There are proven solutions with row values and multi-column indexes for paging:
Optimize query with OFFSET on large table
But drawing from a combination of multiple source tables complicates matters. Optimization depends on the details of your setup.
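As a minimal sketch of that row-value (keyset) approach, assuming the deterministic key (order_date, field2) suggested above; :last_order_date, :last_field2 and :page_size are placeholders fed from the last row of the previous page:
SELECT field1, field2, order_date
FROM (
    SELECT a1.abc AS field1, a1.bcd AS field2, a1.date AS order_date FROM a1_table a1
    UNION ALL
    SELECT a2.def AS field1, a2.fff AS field2, a2.ts AS order_date FROM a2_table a2
) u
WHERE (order_date, field2) > (:last_order_date, :last_field2)
ORDER BY order_date, field2
LIMIT :page_size;
With a matching multi-column index on each source table, Postgres can often serve this from a few small index scans instead of materializing and numbering the entire union.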
If you can't get the performance you need, your only remaining alternative is to materialize the query results somehow. Temp table, cursor, materialized view - the best tool depends on details of your setup.
Of course, general performance tuning might help, too.

Splitting large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is my hardware isn't so great and the dataset contains tens of millions of rows with about 10 columns per table. When I run the below code, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the tables into two parts and then filtering/analysing/combining as needed going forward. I think because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I start with the code below:
df <- querySql(conn,"SELECT TOP 5000000 * FROM Table1")
Where I get stuck is how to create a second dataframe containing the remaining n - 5000000 rows, ending at the final row retrieved from Table1.
I'm open to suggestions but I think there are two potential answers to this question. The first is to work within the querySql to get it working. The second is to use an R function other than querySql (no idea what this would look like). I'm limited to R due to work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs impose an order to their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records to be returned by a query (let's call these the "returned records") below the number of records that could be returned by that query (let's call these the "selected records") and if you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() partition function.
Importantly, you will still have to specify id columns by which to order the partition. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>0 and rn<=5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>5000000 and rn<=10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn>10000000 and rn<=15000000;
...
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
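For illustration, a sketch of that T-SQL form, again assuming id columns that provide a complete ordering:
SELECT *
FROM Table1
ORDER BY id1, id2
OFFSET 5000000 ROWS FETCH NEXT 5000000 ROWS ONLY;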

Oracle Insert Select with order by

I am working on a PL/SQL procedure where I am using an INSERT-SELECT statement.
I need to insert into the table in an ordered manner, but the ORDER BY I used in the SELECT is not working.
Is there any specific way in Oracle to insert rows in an orderly fashion?
The use of an ORDER BY within an INSERT SELECT is not pointless as long as it can change the content of the inserted data, e.g. when a sequence NEXTVAL is included in the SELECT clause. That holds even though the inserted rows won't come back sorted when fetched - ordering on retrieval is the job of the ORDER BY clause in the SELECT you use to access the rows.
For such a goal, you can use a work-around placing your ORDER BY clause in a sub-query, and it works:
INSERT INTO myTargetTable
(
SELECT mySequence.nextval, sq.* FROM
( SELECT f1, f2, f3, ...fx
FROM mySourceTable
WHERE myCondition
ORDER BY mySortClause
) sq
)
The typical use case for an ordered insert is to co-locate particular values in the same blocks (effectively reducing the clustering factor of indexes on the columns by which you have ordered the data).
This generally requires a direct path insert ...
insert /*+ append */ into ...
select ...
from ...
order by ...
There's nothing invalid about this as long as you accept that it's only worthwhile for bulk data, that the data will load above the high water mark only, and that there are locking issues involved.
Another approach, which achieves mostly the same effect but is arguably more suitable for OLTP systems, is to create the table in a cluster.
The standard Oracle table is a heap-organized table. A heap-organized table is a table with rows stored in no particular order.
Sorting has nothing to do with inserting rows and is completely pointless there. You need an ORDER BY only while projecting/selecting the rows.
That is how the Oracle RDBMS is designed.
I'm pretty sure that Oracle does not guarantee to insert rows to a table in any specific order (even if the rows were inserted in that order).
Performance and storage considerations far outweigh ordering considerations (as every user might have a different preference for order)
Why not just use an "ORDER BY" clause in your SELECT statement?
Or better yet, create a VIEW that already has the ORDER BY clause in it?
CREATE VIEW your_table_ordered AS
SELECT *
FROM your_table
ORDER BY your_column

With multiple SQL ORDER BY columns, are all of them evaluated even if an earlier column has already shown that the rows are not equal?

In a SQL query with multiple ORDER BY columns, are all of them really evaluated during execution?
Example:
select * from my_table
order by field5, field3, field2
If the list, after ordering by field5 and field3, is already unique - only one row per combination of field5 and field3 - is 'order by field2' still run during execution of the query? Or is SQL Server, in my case, smart enough to see this and skip the last step?
I'm asking because I am writing a stored procedure where most of the time I only need to order by two or three columns, but in some cases I would like to order by one final column as well. That last sort is alphanumeric and will slow down the query, so of course I would like to avoid it as much as possible...
The extra column on the end of the sort will have a negligible impact on the speed of the query.
If you can, creating a compound index as previously suggested is probably not a bad idea:
create index my_index on my_table (field5, field3, field2);
I would be astounded if the internal sort implementation didn't make the optimization you're talking about anyway - that's data structures and algorithms 101.
Be warned, though: there are situations where an index here would make things worse, for example if you have heavy churn on a table with many rows. And if you have a table with few columns to start with, the optimizer will just do a full table scan anyway because it'd be faster.
Most likely yes - I can't see how SQL Server could know whether there are ties left before the last column without actually comparing the rows.
A better way to optimize this would be to add an index on the columns you have in your ORDER BY, sorted in the same way.

Fast Way To Estimate Rows By Criteria

I have seen a few posts detailing fast ways to "estimate" the number of rows in a given SQL table without using COUNT(*). However, none of them seem to really solve the problem if you need to estimate the number of rows which satisfy a given criteria. I am trying to get a way of estimating the number of rows which satisfy a given criteria, but the information for these criteria is scattered around two or three tables. Of course a SELECT COUNT(*) with the NOLOCK hint and a few joins will do, and I can afford under- or over-estimating the total records. The problem is that this kind of query will be running every 5-10 minutes or so, and since I don't need the actual number - only an estimate - I would like to trade off accuracy for speed.
The solution, if any, may be "SQL Server"-specific. In fact, it must be compatible with SQL Server 2005. Any hints?
There is no easy way to do this. You can get an estimate for the total number of rows in a table, e.g. from system catalog views.
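For example, a rough sketch for SQL Server 2005 and later, reading the per-partition row counts the engine tracks (approximate, not transactionally exact; the table name is a placeholder):
SELECT SUM(p.rows) AS estimated_rows
FROM sys.partitions p
WHERE p.object_id = OBJECT_ID('dbo.YourTable')
  AND p.index_id IN (0, 1);  -- heap or clustered index only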
But there's no way to do this for a given set of criteria in a WHERE clause - either you would have to keep counts for each set of criteria and values, or you'd have to use black magic to find that out. The only place SQL Server keeps something that goes in that direction is the statistics it maintains on the indices. Those contain information about which values occur how frequently in an index - but I quite honestly don't have any idea if (and how) you could leverage that information in your own queries.
If you really must know the number of rows matching a certain criteria, you need to do a count of some sort - either a SELECT COUNT(*) FROM dbo.YourTable WHERE (yourcriteria) or something else.
Something else could be something like this:
wrap your SELECT statement into a CTE (Common Table Expression)
define a ROW_NUMBER() in that CTE ordering your data by some column (or set of columns)
add a second ROW_NUMBER() to that CTE that orders your data by the same column (or columns) - but in the opposite direction (DESC vs. ASC)
Something like this:
;WITH YourDataCTE AS
(
SELECT (list of columns you need),
ROW_NUMBER() OVER(ORDER BY <your column>) AS 'RowNum',
ROW_NUMBER() OVER(ORDER BY <your column> DESC) AS 'RowNum2'
FROM
dbo.YourTable
WHERE
<your conditions here>
)
SELECT *
FROM YourDataCTE
Doing this, you would get the following effect:
your first row in your result set will contain your usual data columns
the first ROW_NUMBER() will contain the value 1
the second ROW_NUMBER() will contain the total number of rows that match that criteria set
It's surprisingly good at dealing with small to mid-size result sets - I haven't yet tried how it holds up with really large result sets - but it might be something to investigate and see if it works.
Possible solutions:
If the matching count is small in comparison to the total number of rows in the table, then adding indexes that cover the WHERE condition will help and the query will be very fast.
If the matching count is close to the total number of rows in the table, indexes will not help much. You could instead implement a trigger that maintains a 'conditional count table': whenever a row matching the condition is added you increment the value in that table, and when such a row is deleted you decrement it. Then you simply query this small 'summary count table', as sketched below.
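A rough sketch of that trigger approach (T-SQL; the table, trigger, and column names are placeholders, and <your conditions here> stands for your actual criteria):
CREATE TABLE dbo.ConditionalCount (MatchingRows INT NOT NULL);

-- seed the summary table once with the current count
INSERT INTO dbo.ConditionalCount (MatchingRows)
SELECT COUNT(*) FROM dbo.YourTable WHERE <your conditions here>;

CREATE TRIGGER trg_YourTable_ConditionalCount
ON dbo.YourTable
AFTER INSERT, DELETE
AS
BEGIN
    -- add rows that newly match, subtract matching rows that were removed
    UPDATE dbo.ConditionalCount
    SET MatchingRows = MatchingRows
        + (SELECT COUNT(*) FROM inserted WHERE <your conditions here>)
        - (SELECT COUNT(*) FROM deleted WHERE <your conditions here>);
END;
Note that an UPDATE which moves a row into or out of the matching set would need similar handling (an AFTER UPDATE trigger with the same inserted/deleted arithmetic).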