Conditional offset on union select - sql

Suppose we have a complex union select over dozens of tables with different structures but similar field meanings:
SELECT a1.abc as field1,
       a1.bcd as field2,
       a1.date as order_date
FROM a1_table a1
UNION ALL
SELECT a2.def as field1,
       a2.fff as field2,
       a2.ts as order_date
FROM a2_table a2
UNION ALL ...
ORDER BY order_date
Notice also that the overall results are sorted by the "synthetic" field order_date.
This query yields a huge number of rows, and we want to work with pages from this set of rows. Each page is defined by two parameters:
page size
field2 value of last item from previous page
The most important thing is that we cannot change the way a page is defined. I.e. it is not possible to use the row number or date of the last item from the previous page: only the field2 value is acceptable.
The current paging algorithm is implemented in a quite ugly way:
1) the query above is wrapped in an additional select with a row_number() column and then wrapped in a stored procedure union_wrapper, which returns an appropriate
table (field1 ..., field2 character varying);
2) then a complex select is performed:
RETURN QUERY
with tmp as (
    select rownum, field1, field2
    from union_wrapper()
)
SELECT field1, field2
FROM tmp
WHERE rownum > (SELECT rownum
                FROM tmp
                WHERE field2 = last_field_id
                LIMIT 1)
LIMIT page_size;
The problem is that we have to build the full union-select result in memory just to find the row number from which to cut the new page. This is quite slow and takes unacceptably long.
Is there any way to restructure these operations so as to significantly reduce query complexity and increase speed?
And again: we cannot change the paging conditions and we cannot change the structure of the tables. Only the way rows are retrieved.
UPD: I also cannot use temp tables, because I'm working on a read replica of the database.

You have successfully maneuvered yourself into a tight spot. The query and its ORDER BY expression contradict your paging requirements.
ORDER BY order_date is not a deterministic sort order (there could be multiple rows with the same order_date) - which you need before you do anything else here. And field2 does not seem to be unique either. You need both: a deterministic sort order and a unique indicator for page end / start. Ideally, the indicator matches the sort order. That could be (order_date, field2), with both columns defined NOT NULL and the combination UNIQUE. Your restriction "only field2 value is acceptable" contradicts your query.
That's all before thinking about how to get best performance ...
There are proven solutions with row values and multi-column indexes for paging:
Optimize query with OFFSET on large table
But drawing from a combination of multiple source tables complicates matters. Optimization depends on the details of your setup.
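For illustration, a minimal sketch of that row-value approach, assuming the page boundary could carry both values (the parameters _last_order_date, _last_field2 and _page_size are hypothetical):
SELECT field1, field2, order_date
FROM ( ... your UNION ALL query ... ) u
WHERE (order_date, field2) > (_last_order_date, _last_field2)
ORDER BY order_date, field2
LIMIT _page_size;
With a matching multi-column index on each source table, PostgreSQL can often satisfy every branch with an index scan and merge them cheaply instead of sorting the whole union.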
If you can't get the performance you need, your only remaining alternative is to materialize the query results somehow. Temp table, cursor, materialized view - the best tool depends on details of your setup.
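Of those, a server-side cursor also works on a read replica. A sketch, assuming you can keep the session and its transaction open between page fetches:
BEGIN;
DECLARE page_cur CURSOR FOR
    SELECT field1, field2, order_date
    FROM ( ... your UNION ALL query ... ) u
    ORDER BY order_date, field2;
FETCH FORWARD 50 FROM page_cur;  -- first page of 50 rows
FETCH FORWARD 50 FROM page_cur;  -- next page
CLOSE page_cur;
COMMIT;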
Of course, general performance tuning might help, too.

Related

Splitting large table into 2 dataframes via JDBC connection in RStudio

Through R I connect to a remotely held database. The issue I have is my hardware isn't so great and the dataset contains tens of millions of rows with about 10 columns per table. When I run the below code, at the df step, I get a "Not enough RAM" error from R:
library(DatabaseConnector)
conn <- connect(connectionDetails)
df <- querySql(conn,"SELECT * FROM Table1")
What I thought about doing was splitting the tables into two parts and then filtering/analysing/combining as needed going forward. I think because I use the conn JDBC connection I have to use SQL syntax to make it work. With SQL, I start with the below code:
df <- querySql(conn,"SELECT TOP 5000000 * FROM Table1")
And then where I get stuck is how to create a second dataframe with the remaining n - 5000000 rows, ending at the final row retrieved from Table1.
I'm open to suggestions but I think there are two potential answers to this question. The first is to work within the querySql to get it working. The second is to use an R function other than querySql (no idea what this would look like). I'm limited to R due to work environment.
The SQL statement
SELECT TOP 5000000 * from Table1
is not doing what you think it's doing.
Relational tables are conceptually unordered.
A relation is defined as a set of n-tuples. In both mathematics and the relational database model, a set is an unordered collection of unique, non-duplicated items, although some DBMSs impose an order to their data.
Selecting from a table produces a result-set. Result-sets are also conceptually unordered unless and until you explicitly specify an order for them, which is generally done using an order by clause.
When you use a top (or limit, depending on the DBMS) clause to reduce the number of records to be returned by a query (let's call these the "returned records") below the number of records that could be returned by that query (let's call these the "selected records") and if you have not specified an order by clause, then it is conceptually unpredictable and random which of the selected records will be chosen as the returned records.
Since you have not specified an order by clause in your query, you are effectively getting 5,000,000 unpredictable and random records from your table. Every single time you run the query you might get a different set of 5,000,000 records (conceptually, at least).
Therefore, it doesn't make sense to ask about how to get a second result-set "starting with n - 5000000 and ending at the final row". There is no n, and there is no final row. The choice of returned records was not deterministic, and the DBMS does not remember such choices of past queries. The only conceivable way such information could be incorporated into a subsequent query would be to explicitly include it in the SQL, such as by using a not in condition on an id column and embedding id values from the first query as literals, or doing some kind of negative join, again, involving the embedding of id values as literals. But obviously that's unreasonable.
There are two possible solutions here.
1: order by with limit and offset
Take a look at the PostgreSQL documentation on limit and offset. First, just to reinforce the point about lack of order, take note of the following paragraphs:
When using LIMIT, it is important to use an ORDER BY clause that constrains the result rows into a unique order. Otherwise you will get an unpredictable subset of the query's rows. You might be asking for the tenth through twentieth rows, but tenth through twentieth in what ordering? The ordering is unknown, unless you specified ORDER BY.
The query optimizer takes LIMIT into account when generating query plans, so you are very likely to get different plans (yielding different row orders) depending on what you give for LIMIT and OFFSET. Thus, using different LIMIT/OFFSET values to select different subsets of a query result will give inconsistent results unless you enforce a predictable result ordering with ORDER BY. This is not a bug; it is an inherent consequence of the fact that SQL does not promise to deliver the results of a query in any particular order unless ORDER BY is used to constrain the order.
Now, this solution requires that you specify an order by clause that fully orders the result-set. An order by clause that only partially orders the result-set will not be enough, since it will still leave room for some unpredictability and randomness.
Once you have the order by clause, you can then repeat the query with the same limit value and increasing offset values.
Something like this:
select * from table1 order by id1, id2, ... limit 5000000 offset 0;
select * from table1 order by id1, id2, ... limit 5000000 offset 5000000;
select * from table1 order by id1, id2, ... limit 5000000 offset 10000000;
...
2: synthesize a numbering column and filter on it
It is possible to add a column to the select clause which will provide a full order for the result-set. By wrapping this SQL in a subquery, you can then filter on the new column and thereby achieve your own pagination of the data. In fact, this solution is potentially slightly more powerful, since you could theoretically select discontinuous subsets of records, although I've never seen anyone actually do that.
To compute the ordering column, you can use the row_number() window function.
Importantly, you will still have to specify id columns by which to order the numbering. This is unavoidable under any conceivable solution; there always must be some deterministic, predictable record order to guide stateless paging through data.
Something like this:
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 0 and rn <= 5000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 5000000 and rn <= 10000000;
select * from (select *, row_number() over (order by id1, id2, ...) rn from table1) t1 where rn > 10000000 and rn <= 15000000;
...
Obviously, this solution is more complicated and verbose than the previous one. And the previous solution might allow for performance optimizations not possible under the more manual approach of partitioning and filtering. Hence I would recommend the previous solution.
My above discussion focuses on PostgreSQL, but other DBMSs should provide equivalent features. For example, for SQL Server, see Equivalent of LIMIT and OFFSET for SQL Server?, which shows an example of the synthetic numbering solution, and also indicates that (at least as of SQL Server 2012) you can use OFFSET {offset} ROWS and FETCH NEXT {limit} ROWS ONLY to achieve limit/offset functionality.
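For instance, a sketch of the first solution in SQL Server 2012+ syntax (again assuming id1, id2, ... fully determine the order):
select * from Table1 order by id1, id2 offset 0 rows fetch next 5000000 rows only;
select * from Table1 order by id1, id2 offset 5000000 rows fetch next 5000000 rows only;
...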

Oracle slow RANK function

My application uses views that must be kept generic (no filters), and which include analytic functions RANK and DENSE_RANK. For example I have a view MYVIEW:
SELECT
RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
FIELD2,
FIELD3
FROM TABLE1;
My application then applies the necessary filters at runtime i.e.
SELECT * FROM MYVIEW WHERE FIELD3 IN ('a','b','c');
My query is very fast without the RANK function, but painfully slow (2+ minutes) with it (I get the right results, just slow). The underlying table has 250,000+ rows and I have no control over its design. I cannot partition it any further. So is it slow because it creates partitions for every unique entry in FIELD1 every time the view is called? Any other way to avoid that? Any suggestions on how to make this faster?
As was mentioned in the comments, with your analytic function in the view, Oracle can't take any shortcuts (predicate pushing) because
in your view, you have created an agreement with Oracle: whenever the view is accessed the RANK should be based on all of the rows in the table - no WHERE clause was specified
when querying a view, an "external" WHERE clause should never affect how a row generated by the view looks, but only whether or not that row is kept
analytic functions look at other rows to generate a value so if you change those rows (filtering) you can change the value - pushing a predicate could easily affect the values generated by these functions
if this could happen, your view result could become very inconsistent (just depending on how the optimizer chose to evaluate the query)
So, based on the details you've provided, your query needs to be evaluated like this:
SELECT *
FROM (
SELECT
RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
FIELD2,
FIELD3
FROM TABLE1
) myview
WHERE <condition>; -- rankings are not affected by external conditions
and not this:
SELECT * FROM (
SELECT
RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
FIELD2,
FIELD3
FROM TABLE1
WHERE FIELD3 IN ('a','b','c') -- ranking is affected by the conditions
)
So, is there a way to make this faster? Maybe.
If the table is partitioned, there's the thought of using parallel query.
Could an index help?
Not in the usual sense. Since there are no conditions in the view itself, it will do a full table scan to consider all of the rows for the rankings and by the time the WHERE clause is applied, it's too late to use an index for filtering.
However, if you had an index that "covered" the query, that is, have an index on just the columns being used (e.g. FIELD1, FIELD2, FIELD3 in that order), an index could be used as a smaller version of the table (instead of FULL TABLE SCAN the plan would show INDEX FAST FULL SCAN.) As a bonus, since it's already sorted, it could be efficient at working out the partitions on FIELD1 and then ordering on FIELD2 within each partition.
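A sketch of such a covering index (the index name is made up):
CREATE INDEX TABLE1_COV_IX ON TABLE1 (FIELD1, FIELD2, FIELD3);
Note that rows where all three columns are NULL would not appear in a B-tree index, so this substitution is only safe if at least one of the indexed columns is NOT NULL.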
Another option would be to make this a materialized view instead, but if your data is changing often, it could be a pain to keep current.
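A minimal sketch, assuming an on-demand complete refresh is acceptable (the name and refresh options are illustrative):
CREATE MATERIALIZED VIEW MYVIEW_MV
REFRESH COMPLETE ON DEMAND AS
SELECT RANK() OVER (PARTITION BY FIELD1 ORDER BY FIELD2) RANK,
       FIELD2,
       FIELD3
FROM TABLE1;
-- refresh after the underlying data changes:
-- EXEC DBMS_MVIEW.REFRESH('MYVIEW_MV');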
One final thought would be something that is similar to the "poor man's" partitioning used before the days of the partitioning option.
(Sorry I can't find a good link that describes this, but maybe you have heard of it before.)
This is really only an option if:
your partitioning column has a relatively small number of distinct values
those values don't change
you know what partition values you can use to isolate the data on in your query
Oracle is willing to push the predicate when it's safe to do so
Given that Oracle seems averse to pushing the predicate when analytic functions are involved, I'm not giving this a high probability of success.
If you want more info on that, let me know.

With multiple SQL order by columns, are all order bys run even if an earlier order by has proved that the rows are not equal?

In a SQL query with multiple order by columns, are all of them really evaluated during execution?
Example:
select * from my_table
order by field5, field3, field2
If the list, after sorting by field5 and field3, is already unique with only one combination of field5 and field3, is 'order by field2' still run during execution of the SQL query? Or is, in my case, SQL Server smart enough to see this and skip the last step?
I'm asking because I am writing a stored procedure where most of the time I would only need to order by two or three columns, but in some cases I would like to order by a last column if necessary. That last sort is alphanumeric, which will slow down the query, so of course I would like to avoid it as much as possible...
The extra column on the end of the sort will have a negligible impact on the speed of the query.
If you can, creating a compound index as previously suggested is probably not a bad idea:
create index my_index on my_table (field5, field3, field2);
I would be astounded if the internal sort implementation didn't make the optimization you're talking about anyway; that's data structures and algorithms 101.
Be warned though, there are situations where an index here would make things worse: large churn on a table with many tuples, for example. And if you have a table with few columns to start with, the optimizer may just do a full table scan anyway because it'd be faster.
Most likely yes; I can't see how SQL Server would know whether there are multiple rows for the last column other than by actually reading them.
A better way to optimize this would be to add an index for the columns you have in your order by, sorted in the same way.

Selecting data effectively sql

I have a very large table with over 1000 records and 200 columns. When I try to retrieve records matching some criteria in the WHERE clause using a SELECT statement it takes a lot of time. But most of the time I just want to select a single record that matches the criteria in the WHERE clause rather than all the records.
I guess there should be a way to select just a single record and exit, which would minimize the retrieval time. I tried ROWNUM=1 in the WHERE clause but it didn't really work, because I guess the engine still checks all the records even after finding the first record matching the WHERE criteria. Is there a way to optimize when I want to select just a few records?
Thanks in advance.
Edit:
I am using oracle 10g.
The Query looks like,
Select *
from Really_Big_table
where column1 is NOT NULL
and column2 is NOT NULL
and rownum=1;
This seems to work slower than the version without rownum=1;
rownum is what you want, but you need to perform your main query as a subquery.
For example, if your original query is:
SELECT col1, col2
FROM table
WHERE condition
then you should try
SELECT *
FROM (
SELECT col1, col2
FROM table
WHERE condition
) WHERE rownum <= 1
See http://www.oracle.com/technology/oramag/oracle/06-sep/o56asktom.html for details on how rownum works in Oracle.
1,000 records isn't a lot of data in a table. 200 columns is a reasonably wide table. For this reason, I'd suggest you aren't dealing with a really big table - I've performed queries against millions of rows with no problems.
Here is a little experiment... how long does it take to run this compared to the "SELECT *" query?
SELECT
Really_Big_table.Id
FROM
Really_Big_table
WHERE
column1 IS NOT NULL
AND
column2 IS NOT NULL
AND
rownum=1;
An example:
SELECT ename, sal
FROM ( SELECT ename, sal, RANK() OVER (ORDER BY sal DESC) sal_rank
FROM emp )
WHERE sal_rank <= 1;
You also have to index the columns that appear in the WHERE clause.
In SQL most of the optimization would come in the form of indexes on the table (as a rough guide, index the columns that appear in the WHERE and ORDER BY clauses).
You did not specify what SQL database you are using, so I can't point to a good resource.
Here is an introduction to indexes on Oracle.
Here is another tutorial.
As for queries - you should always specify the columns you are returning and not use a blanket *.
It shouldn't take a lot of time to query a 1000-row table. There are exceptions, however; check whether you are in one of the following cases:
1. Lots of rows were deleted
The table had a massive amount of rows in the past. Since the High Water Mark (HWM) is still high (delete won't lower it) and a FULL TABLE SCAN reads all the data up to the high water mark, it may take a lot of time to return results even if the table is now nearly empty.
Analyse your table (dbms_stats.gather_table_stats('<owner>','<table>')) and compare the space actually used by the rows (space on disk) with the effective space (data), for example:
SELECT t.avg_row_len * t.num_rows data_bytes,
(t.blocks - t.empty_blocks) * ts.block_size bytes_used
FROM user_tables t
JOIN user_tablespaces ts ON t.tablespace_name = ts.tablespace_name
WHERE t.table_name = '<your_table>';
You will have to take into account the overhead of the rows and blocks as well as the space reserved for update (PCT_FREE). If you see that you use a lot more space than required (typical overhead is below 30%, YMMV) you may want to reset the HWM, either:
ALTER TABLE <your_table> MOVE; and then rebuild the indexes (ALTER INDEX <index> REBUILD); don't forget to collect stats afterwards, or
use DBMS_REDEFINITION
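A sketch of the DBMS_REDEFINITION route (owner and table names are placeholders, an interim table with the same structure must already exist, and the full procedure parameters are in the Oracle docs):
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('<owner>', '<your_table>');
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('<owner>', '<your_table>', '<interim_table>');
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('<owner>', '<your_table>', '<interim_table>');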
2. The table has very large columns
Check if you have columns of datatype LOB, CLOB, LONG (irk), etc. Data over 4000 bytes in any of these columns is stored out of line (in a separate segment), which means that if you don't select these columns you will only query the other smaller columns.
If you are in this case, don't use SELECT *. Either you don't need the data in the large columns, or use SELECT rowid and then do a second query: SELECT * ... WHERE rowid = <rowid>.
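A sketch of that two-step pattern against the table from the question (the bind variable :rid is illustrative):
SELECT rowid AS rid
FROM Really_Big_table
WHERE column1 IS NOT NULL
  AND column2 IS NOT NULL
  AND rownum = 1;

SELECT *
FROM Really_Big_table
WHERE rowid = :rid;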

Faster 'select distinct thing_id,thing_name from table1' in oracle

I have this query:
select distinct id,name from table1
For a given ID, the name will always be the same. Both fields are indexed. There's no separate table that maps the id to the name. The table is very large (tens of millions of rows), so the query could take some time.
This query is very fast, since it's indexed:
select distinct name from table1
Likewise for this query:
select distinct id from table1
Assuming I can't get the database structure changed (a very safe assumption) what's a better way to structure the first query for performance?
Edit to add a sanitized desc of the table:
Name                           Null     Type
------------------------------ -------- ----------------------------
KEY                            NOT NULL NUMBER
COL1                           NOT NULL NUMBER
COL2                           NOT NULL VARCHAR2(4000 CHAR)
COL3                                    VARCHAR2(1000 CHAR)
COL4                                    VARCHAR2(4000 CHAR)
COL5                                    VARCHAR2(60 CHAR)
COL6                                    VARCHAR2(150 CHAR)
COL7                                    VARCHAR2(50 CHAR)
COL8                                    VARCHAR2(3 CHAR)
COL9                                    VARCHAR2(3 CHAR)
COLA                                    VARCHAR2(50 CHAR)
COLB                           NOT NULL DATE
COLC                           NOT NULL DATE
COLD                           NOT NULL VARCHAR2(1 CHAR)
COLE                           NOT NULL NUMBER
COLF                           NOT NULL NUMBER
COLG                                    VARCHAR2(600 CHAR)
ID                                      NUMBER
NAME                                    VARCHAR2(50 CHAR)
COLH                                    VARCHAR2(3 CHAR)

20 rows selected
[LATEST EDIT]
My ORIGINAL ANSWER regarding creating the appropriate index on (name,id) to replace the index on (name) is below. (That wasn't an answer to the original question, which disallowed any database changes.)
Here are statements that I have not yet tested. There's probably some obvious reason these won't work. I'd never actually suggest writing statements like this (at the risk of being thoroughly drubbed for such a ridiculous suggestion).
If these queries even return result sets, the result set will only resemble the result set from the OP query, almost by accident, taking advantage of a quirky guarantee about the data that Don has provided us. These statements are NOT equivalent to the original SQL; they are designed for the special case as described by Don.
select m1.id
, m2.name
from (select min(t1.rowid) as min_rowid
, t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
, (select min(t2.rowid) as min_rowid
, t2.name from table1 t2
where t2.name is not null
group by t2.name
) m2
where m1.min_rowid = m2.min_rowid
order
by m1.id
Let's unpack that:
m1 is an inline view that gets us a list of distinct id values.
m2 is an inline view that gets us a list of distinct name values.
materialize the views m1 and m2
match the ROWID from m1 and m2 to match id with name
Someone else suggested the idea of an index merge. I had previously dismissed that idea: an optimizer plan matching tens of millions of ROWIDs without eliminating any of them seemed expensive.
With sufficiently low cardinality for id and name, and with the right optimizer plan:
select m1.id
, ( select m2.name
from table1 m2
where m2.id = m1.id
and rownum = 1
) as name
from (select t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
order
by m1.id
Let's unpack that
m1 is an inline view that gets us a list of distinct id values.
materialize the view m1
for each row in m1, query table1 to get the name value from a single row (stopkey)
IMPORTANT NOTE
These statements are FUNDAMENTALLY different than the OP query. They are designed to return a DIFFERENT result set than the OP query. They happen to return the desired result set because of a quirky guarantee about the data. Don has told us that a name is determined by id. (Is the converse true? Is id determined by name? Do we have a STATED GUARANTEE, not necessarily enforced by the database, but a guarantee that we can take advantage of?) For any ID value, every row with that ID value will have the same NAME value. (And are we also guaranteed the converse is true, that for any NAME value, every row with that NAME value will have the same ID value?)
If so, maybe we can make use of that information. If ID and NAME appear in distinct pairs, we only need to find one particular row. The "pair" is going to have a matching ROWID, which conveniently happens to be available from each of the existing indexes. What if we get the minimum ROWID for each ID, and the minimum ROWID for each NAME? Couldn't we then match the ID to the NAME based on the ROWID that contains the pair? I think it might work, given a low enough cardinality. (That is, if we're dealing with only hundreds of ROWIDs rather than tens of millions.)
[/LATEST EDIT]
[EDIT]
The question is now updated with information concerning the table; it shows that the ID column and the NAME column both allow NULL values. If Don can live without any NULLs returned in the result set, then adding the IS NOT NULL predicate on both of those columns may enable an index to be used. (NOTE: in an Oracle B-Tree index, NULL values do NOT appear in the index.)
[/EDIT]
ORIGINAL ANSWER:
create an appropriate index
create index table1_ix3 on table1 (name, id) ... ;
Okay, that's not the answer to the question you asked, but it's the right answer to fixing the performance problem. (You specified no changes to the database, but in this case, changing the database is the right answer.)
Note that if you have an index defined on (name,id), then you (very likely) don't need an index on (name), since the optimizer will consider the leading name column in the other index.
(UPDATE: as someone more astute than I pointed out, I hadn't even considered the possibility that the existing indexes were bitmap indexes and not B-tree indexes...)
Re-evaluate your need for the result set... do you need to return id, or would returning name be sufficient.
select distinct name from table1 order by name;
For a particular name, you could submit a second query to get the associated id, if and when you needed it...
select id from table1 where name = :b1 and rownum = 1;
If you really need the specified result set, you can try some alternatives to see if the performance is any better. I don't hold out much hope for any of these:
select /*+ FIRST_ROWS */ DISTINCT id, name from table1 order by id;
or
select /*+ FIRST_ROWS */ id, name from table1 group by id, name order by name;
or
select /*+ INDEX(table1) */ id, min(name) from table1 group by id order by id;
UPDATE: as others have astutely pointed out, with this approach we're testing and comparing performance of alternative queries, which is a sort of hit or miss approach. (I don't agree that it's random, but I would agree that it's hit or miss.)
UPDATE: tom suggests the ALL_ROWS hint. I hadn't considered that, because I was really focused on getting a query plan using an INDEX. I suspect the OP query is doing a full table scan, and it's probably not the scan that's taking the time, it's the sort unique operation (<10g) or hash operation (10gR2+) that takes the time. (Absent timed statistics and event 10046 trace, I'm just guessing here.) But then again, maybe it is the scan, who knows, the high water mark on the table could be way out in a vast expanse of empty blocks.
It almost goes without saying that the statistics on the table should be up-to-date, and we should be using SQL*Plus AUTOTRACE, or at least EXPLAIN PLAN to look at the query plans.
But none of the suggested alternative queries really address the performance issue.
It's possible that hints will influence the optimizer to choose a different plan, basically satisfying the ORDER BY from an index, but I'm not holding out much hope for that. (I don't think the FIRST_ROWS hint works with GROUP BY; the INDEX hint may.) I can see the potential for such an approach in a scenario where there are gobs of data blocks that are empty and sparsely populated, and by accessing the data blocks via an index, it could actually be significantly fewer data blocks pulled into memory... but that scenario would be the exception rather than the norm.
UPDATE: As Rob van Wijk points out, making use of the Oracle trace facility is the most effective approach to identifying and resolving performance issues.
Without the output of an EXPLAIN PLAN or SQL*Plus AUTOTRACE output, I'm just guessing here.
I suspect the performance problem you have right now is that the table data blocks have to be referenced to get the specified result set.
There's no getting around it, the query cannot be satisfied from just an index, since there isn't an index that contains both the NAME and ID columns, with either the ID or NAME column as the leading column. The other two "fast" OP queries can be satisfied from an index without needing to reference the rows (data blocks).
Even if the optimizer plan for the query was to use one of the indexes, it still has to retrieve the associated row from the data block, in order to get the value for the other column. And with no predicate (no WHERE clause), the optimizer is likely opting for a full table scan, and likely doing a sort operation (<10g). (Again, an EXPLAIN PLAN would show the optimizer plan, as would AUTOTRACE.)
I'm also assuming here (big assumption) that both columns are defined as NOT NULL.
You might also consider defining the table as an index organized table (IOT), especially if these are the only two columns in the table. (An IOT isn't a panacea, it comes with its own set of performance issues.)
You can try re-writing the query (unless that's a database change that is also verboten; in our database environments, we consider a query to be as much a part of the database as the tables and indexes).
Again, without a predicate, the optimizer will likely not use an index. There's a chance you could get the query plan to use one of the existing indexes to get the first rows returned quickly, by adding a hint, test a combination of:
select /*+ INDEX(table1) */ ...
select /*+ FIRST_ROWS */ ...
select /*+ ALL_ROWS */ ...
distinct id, name from table1;
distinct id, name from table1 order by id;
distinct id, name from table1 order by name;
id, name from table1 group by id, name order by id;
id, min(name) from table1 group by id order by id;
min(id), name from table1 group by name order by name;
With a hint, you may be able to influence the optimizer to use an index, and that may avoid the sort operation, but overall, it may take more time to return the entire result set.
(UPDATE: someone else pointed out that the optimizer might choose to merge two indexes based on ROWID. That's a possibility, but without a predicate to eliminate some rows, that's likely going to be a much more expensive approach (matching tens of millions of ROWIDs) from two indexes, especially when none of the rows are going to be excluded on the basis of the match.)
But all that theorizing doesn't amount to squat without some performance statistics.
Absent altering anything else in the database, the only other hope (that I can think of) for speeding up the query is to make sure the sort operation is tuned so that the (required) sort can be performed in memory, rather than on disk. But that's not really the right answer. The optimizer may not be doing a sort operation at all; it may be doing a hash operation (10gR2+) instead, in which case that should be tuned. The sort operation is just a guess on my part, based on past experience with Oracle 7.3, 8, 8i, 9i.
A serious DBA is going to have more issue with you futzing with the SORT_AREA_SIZE and/or HASH_AREA_SIZE parameters for your session(s) than he will in creating the correct indexes. (And those session parameters are "old school" for versions prior to 10g automatic memory management magic.)
Show your DBA the specification for the result set, let the DBA tune it.
A query cannot be tuned by looking at it, or by randomly suggesting some equivalent queries, regardless of how well meant they are.
You, we, or the optimizer needs to know statistics about your data. And then you can measure with tools like EXPLAIN PLAN or SQL Trace/tkprof or even the simple autotrace tool from SQL*Plus.
Can you show us the output of this:
set serveroutput off
select /*+ gather_plan_statistics */ distinct id,name from table1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
And what does your entire table1 look like? Please show a describe output.
Regards,
Rob.
"The table is very large (10 of millions of rows)"
If you can't change the database (add index etc). Then your query will have no choice but to read the entire table. So firstly, determine how long that takes (ie time the SELECT ID,NAME FROM TABLE1). You won't get it any quicker than that.
The second step it has to do is the DISTINCT. In 10g+ that should use a HASH GROUP BY. Prior to that it is a SORT operation. The former is quicker. If your database is 9i, then you MAY get an improvement by copying the 10 million rows into a 10g database and doing it there.
Alternatively, allocate gobs of memory (google ALTER SESSION SET SORT_AREA_SIZE). That may harm other processes on the database, but then your DBAs aren't giving you much option.
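A sketch of those session settings (the value is illustrative; on 9i+ with automatic PGA management, WORKAREA_SIZE_POLICY must be MANUAL for SORT_AREA_SIZE to take effect):
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;  -- roughly 100 MB for this session's sorts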
You could try this:
select id, max(name) from table1 group by id
This uses the index on id for sure, but you have to test whether it performs fast.
Without wishing to indulge in the practice of throwing stuff at the wall until something sticks, try this:
select id, name from table1 group by id, name
I have vague memories of a GROUP BY being inexplicably quicker than a DISTINCT.
Why do you even need to have "name" in the clause if the name is always the same for a given id? (nm... you want the name, you aren't just checking for existence)
SELECT name, id FROM table WHERE id in (SELECT DISTINCT id FROM table)?
Don't know if that helps...
Is id unique? If so, you could drop DISTINCT from the query. If not - maybe it needs a new name? Yeah, I know, can't change the schema...
You could try something like
Select Distinct t1.id, t2.name
FROM (Select Distinct ID From Table) As T1
INNER JOIN table t2 on t1.id=t2.id
or
Select distinct t1.id, t2.name from table t1
inner Join table t2 on t1.id=t2.id
Not sure if this will work out slower or faster than the original, as I don't completely understand how your table is set up. If each ID will always have the same name, and ID is unique, I don't really see the point of the distinct.
Really try to work something out with the DBAs. Really. Attempt to communicate the benefits and ease their fears of degraded performance.
Got a development environment/database to test this stuff?
How timely must the data be?
How about a copy of the table already grouped by id and name with proper indexing? A batch job could be configured to refresh your new table once a night.
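A sketch of such a copy, assuming name is determined by id (the object names are made up):
CREATE TABLE id_name_copy AS
SELECT id, MIN(name) AS name
FROM table1
WHERE id IS NOT NULL
GROUP BY id;

CREATE INDEX id_name_copy_ix ON id_name_copy (id, name);
-- nightly batch: TRUNCATE TABLE id_name_copy, then re-run the INSERT ... SELECT with the same grouping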
But if that doesn't work out...
How about exporting all of the id and name pairs to an alternate database where you can group and index to your benefit and leave the DBAs with all of their smug rigidness?
This may perform better. It assumes that, as you said, the name is always the same for a given id.
WITH id_list AS (SELECT DISTINCT id FROM table1)
SELECT id_list.id, (SELECT name FROM table1 WHERE table1.id = id_list.id AND rownum = 1)
FROM id_list;
If for a given id the same name is always returned, you can run the following:
SELECT did,
(
SELECT name
FROM table1
WHERE id = did
AND rownum = 1
) AS name
FROM (
SELECT DISTINCT id AS did
FROM table1
WHERE id IS NOT NULL
)
Both queries will use the index on id.
If you still need the NULL values, run this:
SELECT did,
(
SELECT name
FROM table1
WHERE id = did
AND rownum = 1
) AS name
FROM (
SELECT DISTINCT id AS did
FROM table1
WHERE id IS NOT NULL
)
UNION ALL
SELECT NULL, name
FROM table1
WHERE id IS NULL
AND rownum = 1
This will be less efficient, since the second query doesn't use indexes, but it will stop on the first NULL it encounters: if it's close to the beginning of the table, then you're lucky.
See the entry in my blog for performance details:
Distinct pairs