I am new to Oracle (working on 11gR2). I have a table TABLE with roughly 10 million records in it, and this pretty simple query:
SELECT t.col1, t.col2, t.col3, t.col4, t.col5, t.col6, t.col7, t.col8, t.col9, t.col10
FROM TABLE t
WHERE t.col1 = val1
AND t.col11 = val2
AND t.col12 = val3
AND t.col13 = val4
The query currently takes between 30 seconds and a minute.
My question is: how can I improve performance? After a lot of research, I am aware of the classic ways to improve performance, but I have a problem with each of them:
Partitioning: not really an option; the table is used by another project and partitioning it would be too disruptive. Besides, it would only delay the problem, given the number of rows inserted into the table every day.
Add an index: the columns used in the WHERE clause are not the ones returned by the query (except for one), so I have not been able to find an appropriate index yet. As far as I know, an index on 12-13 columns does not make a lot of sense (or does it?).
Materialized views: I must admit I have never used them, but I understand the maintenance cost is pretty high, and my table is updated quite often.
I think the best way to do this would be to add an appropriate index, but I can't find the right columns on which it should be created.
An index makes sense provided that your query returns a small percentage of all rows. You would create one index on all four columns used in the WHERE clause.
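A minimal sketch of that index (the index name is hypothetical; TABLE stands in for your actual table name):

CREATE INDEX table_where_ix ON TABLE (col1, col11, col12, col13);

Putting the most selective of the four columns first tends to help other queries that filter on a leading subset of these columns.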
If too many records match, a full table scan will be done instead. You may be able to speed that up by having the scan run in parallel using the PARALLEL hint:
SELECT /*+ parallel(t,4) */
t.col1, t.col2, t.col3, t.col4, t.col5, t.col6, t.col7, t.col8, t.col9, t.col10
FROM TABLE t
WHERE t.col1 = val1 AND t.col11 = val2 AND t.col12 = val3 AND t.col13 = val4;
A table with 10 million records is quite small. You just need to create an appropriate index. Which columns to index depends on their content. For example, if a column contains only "1" and "0", or "yes" and "no", you shouldn't index it: the more distinct values a column contains, the more effective an index on it is. You can also create an index on two, three, or more columns, or a function-based index (in which case the index stores the results of a SQL function rather than raw column values). You can also create more than one index on a table.
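For illustration (all object names here are hypothetical), a composite index and a function-based index look like this:

CREATE INDEX t_comp_ix ON my_table (col_a, col_b);   -- composite index on two columns
CREATE INDEX t_func_ix ON my_table (UPPER(col_c));   -- function-based index

The second index can serve predicates written against the same expression, e.g. WHERE UPPER(col_c) = 'SOMEVALUE'.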
In any case, if your query selects more than 20-30% of all the table's records, an index will not help.
You also said the table is used by other people. In that case, you need to cooperate with them to avoid creating duplicate indexes.
Indexes on each of the columns referenced in the WHERE clause will help performance of a query against a table with a large number of rows, where you are seeking a small subset, even if the columns in the WHERE clause are not returned in the SELECT column list.
The downside, of course, is that indexes impede insert/update performance. So when loading large numbers of records into the table, you may need to disable or drop the indexes before the load and re-create or enable them afterwards.
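For example (index name hypothetical), a bulk load might be wrapped like this:

ALTER SESSION SET skip_unusable_indexes = TRUE;  -- let DML proceed while the index is unusable
ALTER INDEX my_table_ix UNUSABLE;                -- before the load
-- ... perform the bulk load ...
ALTER INDEX my_table_ix REBUILD;                 -- afterwards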
I have a query that searches for a range of user accounts. Each time I run it, I search on the first 5 digits of multiple ID numbers. Is there a better way to write this query when searching on more than 10 user IDs? Will there be a huge performance hit with this kind of search?
example:
select A.col1,B.col2,B.col3
from table1 A,table2 B
where A.col2=B.col2
and (B.col_id like '12345%'
OR B.col_id like '47474%'
OR B.col_id like '59598%');
I am using Oracle11g.
Actually, it is not important how many user IDs you pass to the query. What matters most is the selectivity of your query; in other words, how many rows your query returns relative to how many rows are in your tables. If the number of returned rows is relatively small, then it is a good idea to create an index on column B.col_id. There is also nothing wrong with using OR conditions: basically, each OR adds one more INDEX RANGE SCAN to the execution plan, with a final CONCATENATION (but check your actual plan to be sure). If the total cost of those operations is lower than a full table scan, the Oracle CBO will use your index. Otherwise, if you are selecting >= 20-30% of the data at once, a full table scan is likely anyway, and then you should worry even less about the ORs: all the data will be read regardless, and comparing each value against your multiple conditions won't add much overhead.
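A sketch of that index (the name is hypothetical):

CREATE INDEX table2_colid_ix ON table2 (col_id);  -- each OR branch can become an INDEX RANGE SCAN on this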
Generally, a LIKE with a leading wildcard ('%12345') makes it impossible for Oracle to use a B-tree index; a pattern with only a trailing wildcard, like '12345%' here, can still use an index range scan.
If the query is going to be reused, consider creating a synthetic column holding the first 5 characters of COL_ID. Put a non-unique index on it. Put your search keys in a table and join that table to that column.
There may be a way to do it with a function-based index on the first 5 characters; see the sketch below.
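A sketch of that idea, combined with a driving table of search keys (search_keys, key_prefix, and the index name are hypothetical, not from the question):

CREATE INDEX table2_colid5_ix ON table2 (SUBSTR(col_id, 1, 5));

SELECT A.col1, B.col2, B.col3
FROM table1 A, table2 B, search_keys K
WHERE A.col2 = B.col2
AND SUBSTR(B.col_id, 1, 5) = K.key_prefix;  -- same expression as the index, so the index can be used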
I don't know if the performance will be better or not, but another way to write this is with a union:
select A.col1,B.col2,B.col3
from table1 A,table2 B
where A.col2=B.col2
and (B.col_id like '12345%')
union all
select A.col1,B.col2,B.col3
from table1 A,table2 B
where A.col2=B.col2
and (B.col_id like '47474%')
union all
select A.col1,B.col2,B.col3
from table1 A,table2 B
where A.col2=B.col2
and (B.col_id like '59598%');
I have a process that consolidates 40+ identically structured databases down to one consolidated database, the only difference being that the consolidated database adds a project_id field to each table.
In order to be as efficient as possible, I'm trying to copy/update a record from the source databases to the consolidated database only if it has been added or changed. I delete outdated records from the consolidated database and then copy in any non-existing records. To delete outdated/changed records I'm using a query similar to this:
DELETE FROM <table> a
WHERE NOT EXISTS (SELECT <primary keys>
                  FROM <source> b
                  WHERE ((<b.fields = a.fields>) OR
                         (b.fields IS NULL AND a.fields IS NULL)))
AND PROJECT_ID = <project_id>
This works for the most part, but one of the tables in the source database has over 700,000 records, and this query takes over an hour to complete.
How can I make this query more efficient?
Use timestamps, or better yet audit tables, to identify the records that changed since time "X", and save time "X" when the last sync started. We use that for interface feeds.
You might want to try a LEFT JOIN with a NULL filter:
DELETE <table>
FROM <table> t
LEFT JOIN <source> b
ON (t.Field1 = b.Field1 OR (t.Field1 IS NULL AND b.Field1 IS NULL))
AND(t.Field2 = b.Field2 OR (t.Field2 IS NULL AND b.Field2 IS NULL))
--//...
WHERE t.PROJECT_ID = <project_id>
AND b.PrimaryKey IS NULL --// any of the PK fields will do, but I really hope you do not use composite PKs
But if you are comparing all non-PK columns, then your query is going to suffer.
In that case it is better to add an UpdatedAt TIMESTAMP field (as DVK suggests) on both databases, maintained by an update trigger; your sync procedure would then be much faster, provided you create an index covering the PKs and the UpdatedAt column.
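A minimal sketch of such a trigger in Oracle (table and column names hypothetical; note that modifying the row being updated requires a BEFORE UPDATE trigger rather than an AFTER UPDATE one):

CREATE OR REPLACE TRIGGER my_table_touch
BEFORE UPDATE ON my_table
FOR EACH ROW
BEGIN
  :NEW.updated_at := SYSTIMESTAMP;  -- stamp the row on every update
END;
/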
You can reorder the comparisons in the WHERE clause; it has four of them, so put the one most likely to fail first.
If you can alter the databases/application slightly, and you'll need to do this again, a bit field that says "updated" might not be a bad addition.
I usually rewrite queries like this to avoid the not...
NOT IN is horrible for performance, although NOT EXISTS improves on it.
Check out this article, http://www.sql-server-pro.com/sql-where-clause-optimization.html
My suggestion...
Select your pkey column into a working/temp table, add a column (flag) INT DEFAULT 0 NOT NULL, and index the pkey column. Set flag = 1 wherever the record exists in your subquery (much quicker!).
Then replace the subselect in your main query with an EXISTS against (SELECT pkey FROM temptable WHERE flag = 0).
What this works out to is being able to build a list of 'not exists' values, derived from the all-inclusive set, that can then be used inclusively.
Here's our total set.
{1,2,3,4,5}
Here's the existing set
{1,3,4}
We create our working table from these two sets (technically a left outer join)
(record:exists)
{1:1, 2:0, 3:1, 4:1, 5:0}
Our set of 'not existing records'
{2,5} (Select * from where flag=0)
Our product... and much quicker (indexes!)
{1,2,3,4,5} in {2,5} = {2,5}
{1,2,3,4,5} not in {1,3,4} = {2,5}
This can be done without a working table, but its use makes visualizing what's happening easier.
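A rough sketch of the approach (all object names hypothetical; a global temporary table would also work):

CREATE TABLE work_keys (pkey NUMBER, flag NUMBER DEFAULT 0 NOT NULL);

INSERT INTO work_keys (pkey)
SELECT pkey FROM consolidated_table WHERE project_id = :pid;

CREATE INDEX work_keys_ix ON work_keys (pkey);

UPDATE work_keys w
SET flag = 1
WHERE EXISTS (SELECT 1 FROM source_table s WHERE s.pkey = w.pkey);  -- mark rows that still exist

DELETE FROM consolidated_table c
WHERE c.project_id = :pid
AND c.pkey IN (SELECT pkey FROM work_keys WHERE flag = 0);          -- delete everything else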
Kris
I have this query:
select distinct id,name from table1
For a given ID, the name will always be the same. Both fields are indexed. There's no separate table that maps the id to the name. The table is very large (tens of millions of rows), so the query can take some time.
This query is very fast, since it's indexed:
select distinct name from table1
Likewise for this query:
select distinct id from table1
Assuming I can't get the database structure changed (a very safe assumption) what's a better way to structure the first query for performance?
Edit to add a sanitized desc of the table:
Name Null Type
------------------------------ -------- ----------------------------
KEY NOT NULL NUMBER
COL1 NOT NULL NUMBER
COL2 NOT NULL VARCHAR2(4000 CHAR)
COL3 VARCHAR2(1000 CHAR)
COL4 VARCHAR2(4000 CHAR)
COL5 VARCHAR2(60 CHAR)
COL6 VARCHAR2(150 CHAR)
COL7 VARCHAR2(50 CHAR)
COL8 VARCHAR2(3 CHAR)
COL9 VARCHAR2(3 CHAR)
COLA VARCHAR2(50 CHAR)
COLB NOT NULL DATE
COLC NOT NULL DATE
COLD NOT NULL VARCHAR2(1 CHAR)
COLE NOT NULL NUMBER
COLF NOT NULL NUMBER
COLG VARCHAR2(600 CHAR)
ID NUMBER
NAME VARCHAR2(50 CHAR)
COLH VARCHAR2(3 CHAR)
20 rows selected
[LATEST EDIT]
My ORIGINAL ANSWER regarding creating the appropriate index on (name,id) to replace the index on (name) is below. (That wasn't an answer to the original question, which disallowed any database changes.)
Here are statements that I have not yet tested. There's probably some obvious reason they won't work; I'd never normally suggest writing statements like this (at the risk of being drummed thoroughly for such a ridiculous suggestion).
If these queries even return result sets, the result set will only resemble the result set of the OP query, almost by accident, taking advantage of a quirky guarantee about the data that Don has provided us. These statements are NOT equivalent to the original SQL; they are designed for the special case Don described.
select m1.id
, m2.name
from (select min(t1.rowid) as min_rowid
, t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
, (select min(t2.rowid) as min_rowid
, t2.name
from table1 t2
where t2.name is not null
group by t2.name
) m2
where m1.min_rowid = m2.min_rowid
order
by m1.id
Let's unpack that:
m1 is an inline view that gets us a list of distinct id values.
m2 is an inline view that gets us a list of distinct name values.
materialize the views m1 and m2
match the ROWID from m1 and m2 to match id with name
Someone else suggested the idea of an index merge. I had previously dismissed that idea, since it would mean an optimizer plan that matches tens of millions of ROWIDs without eliminating any of them.
With sufficiently low cardinality for id and name, and with the right optimizer plan:
select m1.id
, ( select m2.name
from table1 m2
where m2.id = m1.id
and rownum = 1
) as name
from (select t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
order
by m1.id
Let's unpack that:
m1 is an inline view that gets us a list of distinct id values.
materialize the view m1
for each row in m1, query table1 to get the name value from a single row (stopkey)
IMPORTANT NOTE
These statements are FUNDAMENTALLY different from the OP query. They are designed to return a DIFFERENT result set than the OP query. They happen to return the desired result set because of a quirky guarantee about the data. Don has told us that name is determined by id. (Is the converse true? Is id determined by name? Do we have a STATED GUARANTEE, not necessarily enforced by the database, but a guarantee we can take advantage of?) For any ID value, every row with that ID value will have the same NAME value. (And are we also guaranteed the converse, that for any NAME value, every row with that NAME value has the same ID value?)
If so, maybe we can make use of that information. If ID and NAME appear in distinct pairs, we only need to find one particular row. The "pair" is going to have a matching ROWID, which conveniently happens to be available from each of the existing indexes. What if we get the minimum ROWID for each ID, and the minimum ROWID for each NAME? Couldn't we then match the ID to the NAME based on the ROWID that contains the pair? I think it might work, given a low enough cardinality. (That is, if we're dealing with only hundreds of ROWIDs rather than tens of millions.)
[/LATEST EDIT]
[EDIT]
The question has now been updated with information about the table; it shows that the ID column and the NAME column both allow NULL values. If Don can live without any NULLs in the result set, then adding an IS NOT NULL predicate on both of those columns may enable an index to be used. (NOTE: NULL values do NOT appear in an Oracle B-tree index.)
[/EDIT]
ORIGINAL ANSWER:
create an appropriate index
create index table1_ix3 on table1 (name,id) ... ;
Okay, that's not the answer to the question you asked, but it's the right answer to fixing the performance problem. (You specified no changes to the database, but in this case, changing the database is the right answer.)
Note that if you have an index defined on (name,id), then you (very likely) don't need an index on (name), since the optimizer will consider the leading name column of the other index.
(UPDATE: as someone more astute than I pointed out, I hadn't even considered the possibility that the existing indexes were bitmap indexes and not B-tree indexes...)
Re-evaluate your need for the result set: do you need to return id, or would returning name be sufficient?
select distinct name from table1 order by name;
For a particular name, you could submit a second query to get the associated id, if and when you needed it...
select id from table1 where name = :b1 and rownum = 1;
If you really need the specified result set, you can try some alternatives to see if the performance is any better. I don't hold out much hope for any of these:
select /*+ FIRST_ROWS */ DISTINCT id, name from table1 order by id;
or
select /*+ FIRST_ROWS */ id, name from table1 group by id, name order by name;
or
select /*+ INDEX(table1) */ id, min(name) from table1 group by id order by id;
UPDATE: as others have astutely pointed out, with this approach we're testing and comparing performance of alternative queries, which is a sort of hit or miss approach. (I don't agree that it's random, but I would agree that it's hit or miss.)
UPDATE: tom suggests the ALL_ROWS hint. I hadn't considered that, because I was really focused on getting a query plan that uses an INDEX. I suspect the OP query is doing a full table scan, and it's probably not the scan that's taking the time; it's the SORT UNIQUE operation (<10g) or hash operation (10gR2+) that takes the time. (Absent timed statistics and an event 10046 trace, I'm just guessing here.) But then again, maybe it is the scan; who knows, the high water mark on the table could be way out in a vast expanse of empty blocks.
It almost goes without saying that the statistics on the table should be up-to-date, and we should be using SQL*Plus AUTOTRACE, or at least EXPLAIN PLAN to look at the query plans.
But none of the suggested alternative queries really address the performance issue.
It's possible that hints will influence the optimizer to choose a different plan, basically satisfying the ORDER BY from an index, but I'm not holding out much hope for that. (I don't think the FIRST_ROWS hint works with GROUP BY; the INDEX hint may.) I can see the potential for such an approach in a scenario where there are gobs of empty and sparsely populated data blocks, and by accessing the data blocks via an index, significantly fewer data blocks might actually be pulled into memory... but that scenario would be the exception rather than the norm.
UPDATE: As Rob van Wijk points out, making use of the Oracle trace facility is the most effective approach to identifying and resolving performance issues.
Without the output of an EXPLAIN PLAN or SQL*Plus AUTOTRACE output, I'm just guessing here.
I suspect the performance problem you have right now is that the table data blocks have to be referenced to get the specified result set.
There's no getting around it: the query cannot be satisfied from just an index, since there isn't an index that contains both the NAME and ID columns with either of them as the leading column. The other two "fast" OP queries can be satisfied from an index without needing to reference the rows (data blocks).
Even if the optimizer plan for the query used one of the indexes, it would still have to retrieve the associated row from the data block to get the value of the other column. And with no predicate (no WHERE clause), the optimizer is likely opting for a full table scan, and likely doing a sort operation (<10g). (Again, EXPLAIN PLAN would show the optimizer plan, as would AUTOTRACE.)
I'm also assuming here (big assumption) that both columns are defined as NOT NULL.
You might also consider defining the table as an index-organized table (IOT), especially if these are the only two columns in the table. (An IOT isn't a panacea; it comes with its own set of performance issues.)
You can try re-writing the query (unless that's a database change that is also verboten). In our database environments, we consider a query to be as much a part of the database as the tables and indexes.
Again, without a predicate, the optimizer will likely not use an index. There's a chance you could get the query plan to use one of the existing indexes to return the first rows quickly by adding a hint; test a combination of:
select /*+ INDEX(table1) */ ...
select /*+ FIRST_ROWS */ ...
select /*+ ALL_ROWS */ ...
distinct id, name from table1;
distinct id, name from table1 order by id;
distinct id, name from table1 order by name;
id, name from table1 group by id, name order by id;
id, min(name) from table1 group by id order by id;
min(id), name from table1 group by name order by name;
With a hint, you may be able to influence the optimizer to use an index, and that may avoid the sort operation; but overall, it may take more time to return the entire result set.
(UPDATE: someone else pointed out that the optimizer might choose to merge two indexes based on ROWID. That's a possibility, but without a predicate to eliminate some rows, it's likely to be a much more expensive approach (matching tens of millions of ROWIDs from two indexes), especially when none of the rows will be excluded on the basis of the match.)
But all that theorizing doesn't amount to squat without some performance statistics.
Absent altering anything else in the database, the only other hope I can think of for speeding up the query is to make sure the (required) sort operation is tuned so it can be performed in memory rather than on disk. But that's not really the right answer; the optimizer may not be doing a sort operation at all, it may be doing a hash operation (10gR2+) instead, in which case that is what should be tuned. The sort operation is just a guess on my part, based on past experience with Oracle 7.3, 8, 8i and 9i.
A serious DBA is going to have more issue with you futzing with the SORT_AREA_SIZE and/or HASH_AREA_SIZE parameters for your session(s) than he will in creating the correct indexes. (And those session parameters are "old school" for versions prior to 10g automatic memory management magic.)
Show your DBA the specification for the result set, let the DBA tune it.
A query cannot be tuned by looking at it, or by randomly suggesting some equivalent queries, regardless of how well meant they are.
You, we, or the optimizer need to know statistics about your data. Then you can measure with tools like EXPLAIN PLAN, SQL Trace/tkprof, or even the simple AUTOTRACE tool in SQL*Plus.
Can you show us the output of this:
set serveroutput off
select /*+ gather_plan_statistics */ distinct id,name from table1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
And what does your entire table1 look like? Please show a DESCRIBE output.
Regards,
Rob.
"The table is very large (10 of millions of rows)"
If you can't change the database (add an index, etc.), then your query will have no choice but to read the entire table. So first, determine how long that takes (i.e. time SELECT id, name FROM table1). You won't get it any quicker than that.
The second step is the DISTINCT. In 10g+ that should use a HASH GROUP BY; prior to that it is a SORT operation. The former is quicker. If your database is 9i, then you MAY get an improvement by copying the 10 million rows into a 10g database and doing it there.
Alternatively, allocate gobs of memory (google ALTER SESSION SET SORT_AREA_SIZE). That may harm other processes on the database, but then your DBAs aren't giving you much option.
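For example (the size is illustrative, and this is the manual, pre-10g style of memory management, so clear it with your DBA first):

ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600;  -- 100 MB for this session's sorts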
You could try this:
select id, max(name) from table1 group by id
This may be able to use the index on id, but you will have to test whether it actually performs fast.
Without wishing to indulge in the practice of throwing stuff at the wall until something sticks, try this:
select id, name from table1 group by id, name
I have vague memories of a GROUP BY being inexplicably quicker than a DISTINCT.
Why do you even need to have "name" in the clause if the name is always the same for a given id? (nm... you want the name, you aren't just checking for existence.)
SELECT name, id FROM table1 WHERE id IN (SELECT DISTINCT id FROM table1)?
Don't know if that helps...
Is id unique? If so, you could drop DISTINCT from the query. If not - maybe it needs a new name? Yeah, I know, can't change the schema...
You could try something like
Select Distinct t1.id, t2.name
From (Select Distinct id From table1) t1
Inner Join table1 t2 On t1.id = t2.id
Select distinct t1.id, t2.name from table1 t1
inner join table1 t2 on t1.id = t2.id
Not sure if this will work out slower or faster than the original, as I don't completely understand how your table is set up. If each ID always has the same name, and ID is unique, I don't really see the point of the DISTINCT.
Really try to work something out with the DBAs. Really. Attempt to communicate the benefits and ease their fears of degraded performance.
Got a development environment/database to test this stuff?
How timely must the data be?
How about a copy of the table already grouped by id and name with proper indexing? A batch job could be configured to refresh your new table once a night.
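If the DBAs agree, that nightly copy could be a materialized view along these lines (object names hypothetical):

CREATE MATERIALIZED VIEW table1_id_name_mv
BUILD IMMEDIATE
REFRESH COMPLETE
START WITH TRUNC(SYSDATE + 1) + 2/24  -- first refresh at 02:00 tomorrow
NEXT TRUNC(SYSDATE + 1) + 2/24        -- then nightly
AS SELECT DISTINCT id, name FROM table1;

CREATE INDEX table1_id_name_mv_ix ON table1_id_name_mv (id, name);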
But if that doesn't work out...
How about exporting all of the id and name pairs to an alternate database where you can group and index to your benefit and leave the DBAs with all of their smug rigidness?
This may perform better. It assumes that, as you said, the name is always the same for a given id.
WITH id_list AS (SELECT DISTINCT id FROM table1)
SELECT id_list.id, (SELECT name FROM table1 WHERE table1.id = id_list.id AND rownum = 1)
FROM id_list;
If for a given id the same name is always returned, you can run the following:
SELECT did AS id,
       (
        SELECT name
        FROM table1
        WHERE id = did
        AND rownum = 1
       ) AS name
FROM (
      SELECT DISTINCT id AS did
      FROM table1
      WHERE id IS NOT NULL
     )
Both queries will use the index on id.
If you still need the NULL values, run this:
SELECT did AS id,
       (
        SELECT name
        FROM table1
        WHERE id = did
        AND rownum = 1
       ) AS name
FROM (
      SELECT DISTINCT id AS did
      FROM table1
      WHERE id IS NOT NULL
     )
UNION ALL
SELECT NULL, name
FROM table1
WHERE id IS NULL
AND rownum = 1
This will be less efficient, since the second query doesn't use indexes, but it will stop at the first NULL it encounters: if that row is close to the beginning of the table, you're lucky.
See the entry in my blog for performance details:
Distinct pairs
First, I know that the SQL statement to update table_a using values from table_b takes this form:
Oracle:
UPDATE table_a
SET (col1, col2) = (SELECT cola, colb
FROM table_b
WHERE table_a.key = table_b.key)
WHERE EXISTS (SELECT *
FROM table_b
WHERE table_a.key = table_b.key)
MySQL:
UPDATE table_a
INNER JOIN table_b ON table_a.key = table_b.key
SET table_a.col1 = table_b.cola,
table_a.col2 = table_b.colb
What I understand is that the database engine will go through the records in table_a and update them with values from the matching records in table_b.
So, if I have 10 millions records in table_a and only 10 records in table_b:
Does that mean the engine will do 10 million iterations through table_a just to update 10 records? Are Oracle/MySQL/etc. smart enough to do only 10 iterations through table_b?
Is there a way to force the engine to actually iterate through records in table_b instead of table_a to do the update? Is there an alternative syntax for the sql statement?
Assume that table_a.key and table_b.key are indexed.
Either engine should be smart enough to optimize the query based on the fact that there are only ten rows in table_b. How the engine determines what to do is based on factors like indexes and statistics.
If the "key" column is the primary key and/or is indexed, the engine will have to do very little work to run this query. It will basically already sort of "know" where the matching rows are, and look them up very quickly. It won't have to "iterate" at all.
If there is no index on the key column, the engine will have to do a "table scan" (roughly the equivalent of "iterate") to find the right values and match them up. This means it will have to scan through 10 million rows.
Do a little reading on what's called an Execution Plan. This is basically an explanation of what work the engine had to do in order to run your query (some databases show it in text only, some have the option of seeing it graphically). Learning how to interpret an Execution Plan will give you great insight into adding indexes to your tables and optimizing your queries.
Look these up if they don't work (it's been a while), but it's something like:
In MySQL, put the word "EXPLAIN" in front of your SELECT statement
In Oracle, run "SET AUTOTRACE ON" before you run your SELECT statement
I think the first (Oracle) query would be better written with a JOIN instead of a WHERE EXISTS. The engine may be smart enough to optimize it properly either way. Once you get the hang of interpreting an execution plan, you can run it both ways and see for yourself. :)
Okay, I know answering your own question is usually frowned upon, but I already accepted another answer and won't unaccept it, so meh, here it is...
I've discovered a much better alternative that I'd like to share with anyone who encounters the same scenario: the MERGE statement.
Apparently, newer Oracle versions introduced this MERGE statement, which simply blows the old approach away! Not only is the performance much better in most cases, the syntax is so simple and makes so much sense that I feel stupid for having used the UPDATE statement! Here it comes...
MERGE INTO table_a
USING table_b
ON (table_a.key = table_b.key)
WHEN MATCHED THEN UPDATE SET
table_a.col1 = table_b.cola,
table_a.col2 = table_b.colb;
What's more, I can also extend the statement to include an INSERT action when table_a does not have matching records for some records in table_b:
MERGE INTO table_a
USING table_b
ON (table_a.key = table_b.key)
WHEN MATCHED THEN UPDATE SET
table_a.col1 = table_b.cola,
table_a.col2 = table_b.colb
WHEN NOT MATCHED THEN INSERT
(key, col1, col2)
VALUES (table_b.key, table_b.cola, table_b.colb);
This new statement type made my day the day I discovered it :)