I have a query (below). The explain plan shows high CPU utilization, which has also caused downtime in our lab. Is it possible to tune this query further? What should I do to tune it?
FYI, mtr_main_a, mtr_main_b, and mtr_hist contain a huge number of records, maybe 10 million or more.
SELECT to_char(MAX(mdt), 'MM-DD-RRRR HH24:MI:SS')
FROM (
SELECT MAX(mod_date - 2 / 86400) mdt
FROM mtr_main_a
UNION
SELECT MAX(mod_date - 2 / 86400) mdt
FROM mtr_main_b
UNION
SELECT MAX(mod_date - 2 / 86400) mdt
FROM mtr_hist@batch_hist
)
/
Explain plan is as below
Execution Plan
----------------------------------------------------------
Plan hash value: 1573811822
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Inst |IN-OUT|
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 9 | | 79803 (1)| 00:18:38 | | |
| 1 | SORT AGGREGATE | | 1 | 9 | | | | | |
| 2 | VIEW | | 2 | 18 | | 79803 (1)| 00:18:38 | | |
| 3 | SORT UNIQUE | | 2 | 17 | 77M| 79803 (2)| 00:18:38 | | |
| 4 | UNION-ALL | | | | | | | | |
| 5 | SORT AGGREGATE | | 1 | 8 | | 79459 (1)| 00:18:33 | | |
| 6 | TABLE ACCESS FULL| MTR_MAIN_A | 5058K| 38M| | 67735 (1)| 00:15:49 | | |
| 7 | SORT AGGREGATE | | 1 | 9 | | 344 (1)| 00:00:05 | | |
| 8 | TABLE ACCESS FULL| MTR_MAIN_B | 1 | 9 | | 343 (1)| 00:00:05 | | |
| 9 | REMOTE | | | | | | | HISTB | R->S |
-------------------------------------------------------------------------------------------------------------
Remote SQL Information (identified by operation id):
----------------------------------------------------
9 - EXPLAIN PLAN SET STATEMENT_ID='PLUS10294704' INTO PLAN_TABLE@! FOR SELECT
MAX("A1"."MOD_DATE"-.00002314814814814814814814814814814814814815) FROM "MTR_HIST" "A1" (accessing
'HISTB' )
Thanks and Regards,
Chandra
You should be able to greatly improve the performance by putting an index on the mod_date columns and changing your query so that the subtraction happens at the end, after the maximum date has been determined:
SELECT to_char(MAX(mdt) - 2 / 86400, 'MM-DD-RRRR HH24:MI:SS')
FROM (
SELECT MAX(mod_date) mdt
FROM mtr_main_a
UNION
SELECT MAX(mod_date) mdt
FROM mtr_main_b
UNION
SELECT MAX(mod_date) mdt
FROM mtr_hist@batch_hist
)
This should get rid of the full table scans.
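With the subtraction moved outside, each inner MAX(mod_date) is a plain aggregate over an indexed column, which Oracle can answer with an INDEX FULL SCAN (MIN/MAX) that touches a single index entry per table instead of scanning millions of rows. A minimal sketch of the indexes; the names are made up, and the one on mtr_hist must be created on the remote database that batch_hist points to:
create index mtr_main_a_mod_date_ix on mtr_main_a (mod_date);
create index mtr_main_b_mod_date_ix on mtr_main_b (mod_date);
-- run this one on the remote (HISTB) side:
create index mtr_hist_mod_date_ix on mtr_hist (mod_date);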
If you have indexes on the columns, how does this version work:
SELECT to_char((case when a.mdt > b.mdt and a.mdt > c.mdt then a.mdt
when b.mdt > c.mdt then b.mdt
else c.mdt
end) - 2 / 86400, 'MM-DD-RRRR HH24:MI:SS')
FROM (SELECT MAX(mod_date) mdt
FROM mtr_main_a
) a cross join
(SELECT MAX(mod_date) mdt
FROM mtr_main_b
) b cross join
(SELECT MAX(mod_date) mdt
FROM mtr_hist@batch_hist
) c
This is just a suggestion, in case the UNION version isn't faster.
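As a side note, Oracle's GREATEST() would express the same three-way comparison more compactly, with one caveat: GREATEST returns NULL if any argument is NULL (for example, if one of the tables is empty), whereas the CASE version falls through to the next branch. A sketch of that variant:
SELECT to_char(GREATEST(a.mdt, b.mdt, c.mdt) - 2 / 86400, 'MM-DD-RRRR HH24:MI:SS')
FROM (SELECT MAX(mod_date) mdt FROM mtr_main_a) a cross join
     (SELECT MAX(mod_date) mdt FROM mtr_main_b) b cross join
     (SELECT MAX(mod_date) mdt FROM mtr_hist@batch_hist) c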
I have the below query, but when I execute it, it runs forever.
WITH aux AS (
SELECT
contract,
contract_account,
business_partner,
payment_plan,
installation,
contract_status
FROM
reta.mv_integrated_md a
WHERE
contract_status IN (
'LIVE',
'FINAL'
)
), aux1 AS (
SELECT
a.*,
CASE
WHEN EXISTS (
SELECT
NULL
FROM
aux b
WHERE
b.business_partner = a.business_partner
AND b.installation = a.installation
AND b.payment_plan = 'BMW'
) THEN
'X'
END h
FROM
aux a
)
SELECT
*
FROM
aux1;
My execution plan shows a huge cost which I cannot locate. How could I optimize this query? I have tried some hints but none of them have worked :(
Plan hash value: 1662974027
----------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
----------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19M| 2000M| 825G (1)|999:59:59 | | |
|* 1 | VIEW | | 19M| 990M| 41331 (1)| 00:00:02 | | |
| 2 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FDA49C92_9A7BE8DE | 19M| 1066M| 41331 (1)| 00:00:02 | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 4 | LOAD AS SELECT | SYS_TEMP_0FDA49C92_9A7BE8DE | | | | | | |
| 5 | PARTITION RANGE SINGLE | | 18M| 974M| 759K (1)| 00:00:30 | 1 | 1 |
|* 6 | TABLE ACCESS STORAGE FULL| MV_INTEGRATED_MD | 18M| 974M| 759K (1)| 00:00:30 | 1 | 1 |
| 7 | VIEW | | 19M| 2000M| 41331 (1)| 00:00:02 | | |
| 8 | TABLE ACCESS STORAGE FULL | SYS_TEMP_0FDA49C92_9A7BE8DE | 19M| 1066M| 41331 (1)| 00:00:02 | | |
----------------------------------------------------------------------------------------------------------------------------
Kindly let me know if any additional information is needed.
Use window functions:
SELECT r.contract, r.contract_account, r.business_partner,
r.payment_plan, r.installation, r.contract_status,
MAX(CASE WHEN r.payment_plan = 'BMW' THEN 'X' END) OVER (PARTITION BY business_partner, installation) as h
FROM reta.mv_integrated_md@rbip r
WHERE r.contract_status IN ('LIVE', 'FINAL');
Not only is the query much simpler to write and read, but it should perform much better too: the table is scanned once, and the per-row EXISTS probe disappears.
The highest cost is due to the FTS (full table scan) on the table/MV MV_INTEGRATED_MD.
Try creating an index on contract_status and check whether it reduces the cost. Also, what is the size of this MV/table in blocks, and is it 10 percent or more of the total buffer cache size?
TABLE ACCESS STORAGE FULL| MV_INTEGRATED_MD | 18M| 974M| 759K (1)| 00:00:30 | 1 | 1
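A sketch of both suggestions; the index name is made up, and dba_segments needs the appropriate privileges (user_segments works for your own schema):
create index mv_integrated_md_cstat_ix on reta.mv_integrated_md (contract_status);
select segment_name, blocks, round(bytes / 1024 / 1024) mb
from dba_segments
where segment_name = 'MV_INTEGRATED_MD';
Compare the reported size against the buffer cache size (visible in v$sgainfo) to judge whether repeated full scans of it can even stay cached.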
If you run your query with the /*+ gather_plan_statistics */ hint (I'm simulating it with a 1000-row table), you immediately see the problem:
select * from table(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
-------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
-------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1000 |00:00:00.01 | 9 | 5 |
|* 1 | VIEW | | 1000 | 1000 | 1000 |00:00:00.09 | 0 | 0 |
| 2 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6737_1A17DE13 | 1000 | 1000 | 500K|00:00:00.08 | 0 | 0 |
| 3 | TEMP TABLE TRANSFORMATION | | 1 | | 1000 |00:00:00.01 | 9 | 5 |
| 4 | LOAD AS SELECT (CURSOR DURATION MEMORY)| SYS_TEMP_0FD9D6737_1A17DE13 | 1 | | 0 |00:00:00.01 | 8 | 5 |
|* 5 | TABLE ACCESS FULL | MV_INTEGRATED_MD | 1 | 1000 | 1000 |00:00:00.01 | 7 | 5 |
| 6 | VIEW | | 1 | 1000 | 1000 |00:00:00.01 | 0 | 0 |
| 7 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6737_1A17DE13 | 1 | 1000 | 1000 |00:00:00.01 | 0 | 0 |
-------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(("B"."BUSINESS_PARTNER"=:B1 AND "B"."INSTALLATION"=:B2 AND "B"."PAYMENT_PLAN"='BMW'))
5 - filter("CONTRACT_STATUS"='LIVE')
It is at plan line Id 2 that a full scan is executed in a loop, once per row of the main table (see Starts = 1000).
Typically you want the EXISTS to be resolved with a semi join to preserve good performance, but here it seems that Oracle cannot rewrite it.
So you'll need to rewrite the query yourself.
Besides the excellent proposal of @GordonLinoff (which I'd start with), you may try to use an outer join as follows:
with bmw as (
select distinct business_partner, installation
from mv_integrated_md
where payment_plan = 'BMW')
SELECT
a.contract,
a.contract_account,
a.business_partner,
a.payment_plan,
a.installation,
a.contract_status,
case when b.business_partner is not null then 'X' end as h
FROM mv_integrated_md a
left outer join bmw b
on b.business_partner = a.business_partner and
b.installation = a.installation
WHERE a.contract_status IN ( 'LIVE', 'FINAL')
This will lead to two full scans, one deduplication, and an outer join.
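If the full scans are still too expensive, a composite index matching the BMW lookup might let the bmw subquery be answered from the index alone. This is only a sketch, and the index name is invented:
create index mv_md_bmw_ix on reta.mv_integrated_md (payment_plan, business_partner, installation);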
I'm using Oracle 11g and I would like to compare two tables, finding the records that match between them.
Example:
Table 1     Table 2
-------     -------
George      Micheal
Michael     Paul
The records "Micheal" and "Michael" match, so they are a good pair.
To see whether two records match, I use the Oracle function utl_match.edit_distance_similarity.
I tried the code below, but I have a performance problem (it is too slow):
SELECT *
FROM table1
JOIN table2
ON utl_match.edit_distance_similarity(table1.name, table2.name) > 75;
Is there a better solution?
Thank you
This is a hard problem. In general, it is going to result in nested loop joins and slowness. It might be possible to use SOUNDEX() to get close-ish matches and then apply the edit distance function for final filtering. This may not work for your problem, but it might.
Although I am not a big fan of the function, you might find that soundex() works for your purposes (see here).
The idea would be to add an index on this value:
create index idx_table1_soundexname on table1(soundex(name));
create index idx_table2_soundexname on table2(soundex(name));
Then you would query this as:
SELECT *
FROM table1 t1 JOIN
table2 t2
ON soundex(t1.name) = soundex(t2.name)
WHERE utl_match.edit_distance_similarity(t1.name, t2.name) > 75;
The idea is that Oracle will use the indexes to get names that are "close" and then the edit distance to get the better matches. This may not work for your problem. It is just an idea that might work.
In case you have a lot of redundancy with respect to name values in your tables table1 and table2, this could be a solution:
-- Test data set
select count(*) from table1;
--> 10,000
select count(*) from table2;
--> 10,000
select count(distinct(name)) from table1;
--> ~ 2,500
select count(distinct(name)) from table2;
--> ~ 2,500
/* a) Join with function compare */
select table1.name, table2.name
from table1, table2
where utl_match.edit_distance_similarity(table1.name, table2.name) > 35
/*
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5000000 | 270000000 | 37364 | 00:09:21 |
| 1 | NESTED LOOPS | | 5000000 | 270000000 | 37364 | 00:09:21 |
| 2 | TABLE ACCESS FULL | TABLE1 | 10000 | 270000 | 5 | 00:00:01 |
| * 3 | TABLE ACCESS FULL | TABLE2 | 500 | 13500 | 4 | 00:00:01 |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
------------------------------------------
* 3 - filter("UTL_MATCH"."EDIT_DISTANCE_SIMILARITY"("TABLE1"."NAME","TABLE2"."NAME")>35)
Note
-----
- dynamic sampling used for this statement
*/
/* b) Join with function, only distinct values */
-- A Set of all existing names (in table1 and table2)
with names as
(select name from table1 union select name from table2),
-- Compare only once because utl_match.edit_distance_similarity(name1, name2) = utl_match.edit_distance_similarity(name2, name1)
table_cmp(name1, name2) as
(select n1.name, n2.name
from names n1
join names n2
on n1.name <= n2.name
and utl_match.edit_distance_similarity(n1.name, n2.name) > 35)
select t1.*, t2.*
from table_cmp c
join table1 t1
on t1.name = c.name1
join table2 t2
on t2.name = c.name2
union all
select t1.*, t2.*
from table_cmp c
join table1 t1
on t1.name = c.name2
join table2 t2
on t2.name = c.name1;
/*
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 30469950 | 3290754600 | 2495 | 00:00:38 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D663E_B39FC2B6 | | | | |
| 3 | SORT UNIQUE | | 20000 | 540000 | 12 | 00:00:01 |
| 4 | UNION-ALL | | | | | |
| 5 | TABLE ACCESS FULL | TABLE1 | 10000 | 270000 | 5 | 00:00:01 |
| 6 | TABLE ACCESS FULL | TABLE2 | 10000 | 270000 | 5 | 00:00:01 |
| 7 | LOAD AS SELECT | SYS_TEMP_0FD9D663F_B39FC2B6 | | | | |
| 8 | MERGE JOIN | | 1000000 | 54000000 | 62 | 00:00:01 |
| 9 | SORT JOIN | | 20000 | 540000 | 3 | 00:00:01 |
| 10 | VIEW | | 20000 | 540000 | 2 | 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9D663E_B39FC2B6 | 20000 | 540000 | 2 | 00:00:01 |
| * 12 | FILTER | | | | | |
| * 13 | SORT JOIN | | 20000 | 540000 | 3 | 00:00:01 |
| 14 | VIEW | | 20000 | 540000 | 2 | 00:00:01 |
| 15 | TABLE ACCESS FULL | SYS_TEMP_0FD9D663E_B39FC2B6 | 20000 | 540000 | 2 | 00:00:01 |
| 16 | UNION-ALL | | | | | |
| * 17 | HASH JOIN | | 15234975 | 1645377300 | 1248 | 00:00:19 |
| 18 | TABLE ACCESS FULL | TABLE2 | 10000 | 270000 | 5 | 00:00:01 |
| * 19 | HASH JOIN | | 3903201 | 316159281 | 1200 | 00:00:18 |
| 20 | TABLE ACCESS FULL | TABLE1 | 10000 | 270000 | 5 | 00:00:01 |
| 21 | VIEW | | 1000000 | 54000000 | 1183 | 00:00:18 |
| 22 | TABLE ACCESS FULL | SYS_TEMP_0FD9D663F_B39FC2B6 | 1000000 | 54000000 | 1183 | 00:00:18 |
| * 23 | HASH JOIN | | 15234975 | 1645377300 | 1248 | 00:00:19 |
| 24 | TABLE ACCESS FULL | TABLE2 | 10000 | 270000 | 5 | 00:00:01 |
| * 25 | HASH JOIN | | 3903201 | 316159281 | 1200 | 00:00:18 |
| 26 | TABLE ACCESS FULL | TABLE1 | 10000 | 270000 | 5 | 00:00:01 |
| 27 | VIEW | | 1000000 | 54000000 | 1183 | 00:00:18 |
| 28 | TABLE ACCESS FULL | SYS_TEMP_0FD9D663F_B39FC2B6 | 1000000 | 54000000 | 1183 | 00:00:18 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
------------------------------------------
* 12 - filter("UTL_MATCH"."EDIT_DISTANCE_SIMILARITY"("N1"."NAME","N2"."NAME")>35)
* 13 - access("N1"."NAME"<="N2"."NAME")
* 13 - filter("N1"."NAME"<="N2"."NAME")
* 17 - access("T2"."NAME"="C"."NAME2")
* 19 - access("T1"."NAME"="C"."NAME1")
* 23 - access("T2"."NAME"="C"."NAME1")
* 25 - access("T1"."NAME"="C"."NAME2")
Note
-----
- dynamic sampling used for this statement
*/
I have a set of complex optimized selects that suffer from physical reads. Without them they would be even faster!
These physical reads occur because of the WITH clause, one physical_read_request per WITH subquery. They seem totally unnecessary to me; I'd prefer Oracle to keep the subquery results in memory instead of writing them to disk and reading them back again.
I'm looking for a way to get rid of these physical reads.
A simple sample having the same problems is this:
Edit: Example replaced with simpler one that does not use dictionary views.
alter session set STATISTICS_LEVEL=ALL;
create table T as
select level NUM from dual
connect by level <= 1000;
with /*a2*/ TT as (
select NUM from T
where NUM between 100 and 110
)
select * from TT
union all
select * from TT
;
SELECT * FROM TABLE(dbms_xplan.display_cursor(
(select sql_id from v$sql
where sql_fulltext like 'with /*a2*/ TT%'
and sql_fulltext not like '%v$sql%'
and sql_fulltext not like 'explain%'),
NULL, format=>'allstats last'));
and the corresponding execution plan is
SQL_ID bpqnhfdmxnqvp, child number 0
-------------------------------------
with /*a2*/ TT as ( select NUM from T where NUM between 100 and
110 ) select * from TT union all select * from TT
Plan hash value: 4255080040
---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 22 |00:00:00.01 | 20 | 1 | 1 | | | |
| 1 | TEMP TABLE TRANSFORMATION | | 1 | | 22 |00:00:00.01 | 20 | 1 | 1 | | | |
| 2 | LOAD AS SELECT | | 1 | | 0 |00:00:00.01 | 8 | 0 | 1 | 266K| 266K| 266K (0)|
|* 3 | TABLE ACCESS FULL | T | 1 | 11 | 11 |00:00:00.01 | 4 | 0 | 0 | | | |
| 4 | UNION-ALL | | 1 | | 22 |00:00:00.01 | 9 | 1 | 0 | | | |
| 5 | VIEW | | 1 | 11 | 11 |00:00:00.01 | 6 | 1 | 0 | | | |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6646_63A776 | 1 | 11 | 11 |00:00:00.01 | 6 | 1 | 0 | | | |
| 7 | VIEW | | 1 | 11 | 11 |00:00:00.01 | 3 | 0 | 0 | | | |
| 8 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6646_63A776 | 1 | 11 | 11 |00:00:00.01 | 3 | 0 | 0 | | | |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(("NUM">=100 AND "NUM"<=110))
Note
-----
- dynamic sampling used for this statement (level=2)
See the (physical) write upon each WITH view creation, and the (physical) read upon the first use of that view. I also tried the RESULT_CACHE hint (which is not reflected in this sample select, but was reflected in my original queries), but it doesn't remove the disk accesses either (which is understandable).
How can I get rid of the physical writes/reads?
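For what it's worth, the workaround most often suggested for this, assuming your version honors it, is the undocumented INLINE hint, which asks the optimizer to expand the factored subquery in place instead of materializing it:
with /*a2*/ TT as (
select /*+ INLINE */ NUM from T
where NUM between 100 and 110
)
select * from TT
union all
select * from TT
;
With the subquery inlined there is no TEMP TABLE TRANSFORMATION step, so the temp-segment write and read disappear; the trade-off is that T is scanned once per reference to TT.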
I'm having trouble optimizing an Oracle query after an upgrade to Oracle 11g and this problem is starting to drive me a little mad.
Note, this question has been now fully edited because I have more information after creating a simple test case. The original question is available here: https://stackoverflow.com/revisions/12304320/1.
The issue is that when joining two tables, one of which has a BETWEEN condition on a date column, bind peeking doesn't happen if the query also joins to a remote table.
Here is a test case to help reproduce the problem. First set up two source tables. The first is a list of dates, the first of each month, going back thirty years:
create table mike_temp_etl_control
as
select
add_months(trunc(sysdate, 'MM'), 1-row_count) as reporting_date
from (
select level as row_count
from dual
connect by level < 360
);
Then some data sourced from dba_objects:
create table mike_temp_dba_objects as
select owner, object_name, subobject_name, object_id, created
from dba_objects
union all
select owner, object_name, subobject_name, object_id, created
from dba_objects;
Then create an empty table to run the data in to:
create table mike_temp_1
as
select
a.OWNER,
a.OBJECT_NAME,
a.SUBOBJECT_NAME,
a.OBJECT_ID,
a.CREATED,
b.REPORTING_DATE
from
mike_temp_dba_objects a
join mike_temp_etl_control b on (
b.reporting_date between add_months(a.created, -24) and a.created)
where 1=2;
Then run the code. You may need to create a larger version of mike_temp_dba_objects to slow the query down (or use some other method to get the execution plan). While the query is running, I get an execution plan for the session by running the following from a different session:
select * from table(dbms_xplan.display_cursor(sql_id => 'xxxxxxxxxxx'));
declare
pv_report_start_date date := date '2002-01-01';
v_report_end_date date := date '2012-07-01';
begin
INSERT /*+ APPEND */
INTO mike_temp_1
select
a.OWNER,
a.OBJECT_NAME,
a.SUBOBJECT_NAME,
a.OBJECT_ID,
a.CREATED,
b.REPORTING_DATE
from
mike_temp_dba_objects a
join mike_temp_etl_control b on (
b.reporting_date between add_months(a.created, -24) and a.created)
cross join dual@emirrl -- This line causes problems...
where
b.reporting_date between add_months(pv_report_start_date, -12) and v_report_end_date;
rollback;
end;
By having a remote table in the query, the cardinality estimate for the mike_temp_etl_control table is completely wrong and bind peeking doesn't seem to be happening.
The execution plan for the query above is shown below:
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
---------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 373 (100)|
| 1 | LOAD AS SELECT | | | | |
|* 2 | FILTER | | | | |
| 3 | MERGE JOIN | | 5 | 655 | 373 (21)|
| 4 | SORT JOIN | | 1096 | 130K| 370 (20)|
| 5 | MERGE JOIN CARTESIAN| | 1096 | 130K| 369 (20)|
| 6 | REMOTE | DUAL | 1 | | 2 (0)|
| 7 | BUFFER SORT | | 1096 | 130K| 367 (20)|
|* 8 | TABLE ACCESS FULL | MIKE_TEMP_DBA_OBJECTS | 1096 | 130K| 367 (20)|
|* 9 | FILTER | | | | |
|* 10 | SORT JOIN | | 2 | 18 | 3 (34)|
|* 11 | TABLE ACCESS FULL | MIKE_TEMP_ETL_CONTROL | 2 | 18 | 2 (0)|
---------------------------------------------------------------------------------------
If I then replace the remote dual with the local version I get the correct cardinality (139 instead of 2):
-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 10682 (100)|
| 1 | LOAD AS SELECT | | | | |
|* 2 | FILTER | | | | |
| 3 | MERGE JOIN | | 152K| 19M| 10682 (3)|
| 4 | SORT JOIN | | 438K| 51M| 10632 (2)|
| 5 | NESTED LOOPS | | 438K| 51M| 369 (20)|
| 6 | FAST DUAL | | 1 | | 2 (0)|
|* 7 | TABLE ACCESS FULL| MIKE_TEMP_DBA_OBJECTS | 438K| 51M| 367 (20)|
|* 8 | FILTER | | | | |
|* 9 | SORT JOIN | | 139 | 1251 | 3 (34)|
|* 10 | TABLE ACCESS FULL| MIKE_TEMP_ETL_CONTROL | 139 | 1251 | 2 (0)|
-------------------------------------------------------------------------------------
So, I guess the question is how can I get the correct cardinality to be estimated? Is this an Oracle bug or is this the expected behaviour?
I think you should look into dynamic sampling. It works differently in 11g, so that may be the reason for your troubles.
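A cheap first experiment, sketched here with an illustrative sampling level and bind names, would be to force a higher sampling level on the control table and see whether the 139-row estimate comes back:
select /*+ dynamic_sampling(b 4) */
       a.owner, a.object_name, a.subobject_name, a.object_id, a.created,
       b.reporting_date
from mike_temp_dba_objects a
join mike_temp_etl_control b
  on (b.reporting_date between add_months(a.created, -24) and a.created)
cross join dual@emirrl
where b.reporting_date between add_months(:report_start_date, -12) and :report_end_date;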
I'm attempting to build an infrastructure for quickly running regressions on demand, pulling apache requests from a database that contains all historic activity on our webservers. To improve coverage by making sure that we still regress requests from our smaller clients, I'd like to ensure a distribution of requests by retrieving at most n (for the sake of this question, say 10) requests for each client.
I found a number of similar questions answered here, the closest of which seemed to be SQL query to return top N rows per ID across a range of IDs, but the answers were for the most part performance-agnostic solutions
that I had already tried. For instance, the row_number() analytic function gets us exactly the data we're looking for:
SELECT
*
FROM
(
SELECT
dailylogdata.*,
row_number() over (partition by dailylogdata.contextid order by occurrencedate) rn
FROM
dailylogdata
WHERE
shorturl in (?)
)
WHERE
rn <= 10;
However, given that this table contains millions of entries for a given day and this approach necessitates reading all rows from the index that match our selection criteria in order to apply the row_number analytic function, performance is terrible. We end up selecting nearly a million rows, only to throw out the vast majority of them because their row_number exceeded 10. Stats from executing the above query:
---------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads | Writes |  OMem |  1Mem | Used-Mem | Used-Tmp |
---------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                         |      1 |        |  12222 |00:09:08.94 |    895K |  584K |    301 |       |       |          |          |
|*  1 |  VIEW                                |                         |      1 |  4427K |  12222 |00:09:08.94 |    895K |  584K |    301 |       |       |          |          |
|*  2 |   WINDOW SORT PUSHED RANK            |                         |      1 |  4427K |  13536 |00:09:08.94 |    895K |  584K |    301 | 2709K |  743K |  97M (1) |     4096 |
|   3 |    PARTITION RANGE SINGLE            |                         |      1 |  4427K |   932K |00:22:27.90 |    895K |  584K |      0 |       |       |          |          |
|   4 |     TABLE ACCESS BY LOCAL INDEX ROWID| DAILYLOGDATA            |      1 |  4427K |   932K |00:22:27.61 |    895K |  584K |      0 |       |       |          |          |
|*  5 |      INDEX RANGE SCAN                | DAILYLOGDATA_URLCONTEXT |      1 |  17345 |   932K |00:00:00.75 |    1448 |     0 |      0 |       |       |          |          |
---------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN"<=:SYS_B_2)
   2 - filter(ROW_NUMBER() OVER ( PARTITION BY "DAILYLOGDATA"."CONTEXTID" ORDER BY "OCCURRENCEDATE")<=:SYS_B_2)
   5 - access("SHORTURL"=:P1)
However, if instead we only query for the first 10 results for a specific contextid, we can execute this dramatically faster:
SELECT
*
FROM
(
SELECT
dailylogdata.*
FROM
dailylogdata
WHERE
shorturl in (?)
and contextid = ?
)
WHERE
rownum <= 10;
Stats from running this query:
-----------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name                    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |                         |      1 |        |     10 |00:00:00.01 |      14 |
|*  1 |  COUNT STOPKEY                       |                         |      1 |        |     10 |00:00:00.01 |      14 |
|   2 |   PARTITION RANGE SINGLE             |                         |      1 |     10 |     10 |00:00:00.01 |      14 |
|   3 |    TABLE ACCESS BY LOCAL INDEX ROWID | DAILYLOGDATA            |      1 |     10 |     10 |00:00:00.01 |      14 |
|*  4 |     INDEX RANGE SCAN                 | DAILYLOGDATA_URLCONTEXT |      1 |      1 |     10 |00:00:00.01 |       5 |
-----------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=10)
   4 - access("SHORTURL"=:P1 AND "CONTEXTID"=TO_NUMBER(:P2))
In this instance, Oracle is smart enough to stop retrieving data after getting 10 results. I could gather a complete set of contextids and programmatically generate a query consisting of one instance of this query for each contextid, union all-ing the whole mess together, but given the sheer number of contextids we might run into an internal Oracle limitation, and even if not, this approach reeks of kludge.
Does anyone know of an approach that maintains the simplicity of the first query, while retaining performance commensurate with the second query? Also note that I don't actually care about retrieving a stable set of rows; so long as they satisfy my criteria, they're fine for the purposes of a regression.
Edit: Adam Musch's suggestion did the trick. I'm appending performance results with his changes up here since I can't fit them in a comment response to his answer. I'm also using a larger data set for testing this time; here are the (cached) stats from my original row_number approach for comparison:
---------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                   |      1 |        |  12624 |00:00:22.34 |   1186K |  931K |       |       |          |
|*  1 |  VIEW                         |                   |      1 |  1163K |  12624 |00:00:22.34 |   1186K |  931K |       |       |          |
|*  2 |   WINDOW NOSORT               |                   |      1 |  1163K |  1213K |00:00:21.82 |   1186K |  931K | 3036M |   17M |          |
|   3 |    TABLE ACCESS BY INDEX ROWID| TWTEST            |      1 |  1163K |  1213K |00:00:20.41 |   1186K |  931K |       |       |          |
|*  4 |     INDEX RANGE SCAN          | TWTEST_URLCONTEXT |      1 |  1163K |  1213K |00:00:00.81 |    8568 |     0 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN"<=10)
   2 - filter(ROW_NUMBER() OVER ( PARTITION BY "CONTEXTID" ORDER BY NULL )<=10)
   4 - access("SHORTURL"=:P1)
I've taken the liberty of boiling down Adam's suggestion a bit; here's the modified query...
select
*
from
twtest
where
rowid in (
select
rowid
from (
select
rowid,
shorturl,
row_number() over (partition by shorturl, contextid
order by null) rn
from
twtest
)
where rn <= 10
and shorturl in (?)
);
...and stats from its (cached) evaluation:
------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name              | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                   |      1 |        |  12624 |00:00:01.33 |   19391 |       |       |          |
|   1 |  NESTED LOOPS                |                   |      1 |      1 |  12624 |00:00:01.33 |   19391 |       |       |          |
|   2 |   VIEW                       | VW_NSO_1          |      1 |  1163K |  12624 |00:00:01.27 |    6770 |       |       |          |
|   3 |    HASH UNIQUE               |                   |      1 |      1 |  12624 |00:00:01.27 |    6770 | 1377K | 1377K | 5065K (0)|
|*  4 |     VIEW                     |                   |      1 |  1163K |  12624 |00:00:01.25 |    6770 |       |       |          |
|*  5 |      WINDOW NOSORT           |                   |      1 |  1163K |  1213K |00:00:01.09 |    6770 |  283M | 5598K |          |
|*  6 |       INDEX RANGE SCAN       | TWTEST_URLCONTEXT |      1 |  1163K |  1213K |00:00:00.40 |    6770 |       |       |          |
|   7 |  TABLE ACCESS BY USER ROWID  | TWTEST            |  12624 |      1 |  12624 |00:00:00.04 |   12621 |       |       |          |
------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("RN"<=10)
   5 - filter(ROW_NUMBER() OVER ( PARTITION BY "SHORTURL","CONTEXTID" ORDER BY NULL NULL )<=10)
   6 - access("SHORTURL"=:P1)

Note
-----
   - dynamic sampling used for this statement (level=2)
As advertised, we're only accessing the dailylogdata table for the fully-filtered rows. I'm concerned that it still appears to be doing a full scan of the urlcontext index, judging by the number of rows it claims to be selecting (1213K), but given that it's only using 6770 buffers (this number remains constant even if I increase the number of context-specific results), this may be misleading.
This is kind of a janky solution, but it seems to do what you want: shortcut the index scan as soon as possible, and not read data until it's been qualified both by filtering conditioning and top-n query criteria.
Note that it was tested with a shorturl = condition, not a shorturl IN condition.
with rowid_list as
(select rowid
from (select *
from (select rowid, shorturl,
row_number() over (partition by shorturl, contextid
order by null) rn
from dailylogdata
)
where rn <= 10
)
where shorturl = ?
)
select *
from dailylogdata
where rowid in (select rowid from rowid_list)
The with clause grabs the rowids of the first 10 rows, filtered through a WINDOW NOSORT, for each unique combination of shorturl and contextid that meets your criteria. Then it loops over that set of rowids, fetching each row by rowid.
----------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 286 | 1536 (1)| 00:00:19 |
| 1 | NESTED LOOPS | | 1 | 286 | 1536 (1)| 00:00:19 |
| 2 | VIEW | VW_NSO_1 | 136K| 1596K| 910 (1)| 00:00:11 |
| 3 | HASH UNIQUE | | 1 | 3326K| | |
|* 4 | VIEW | | 136K| 3326K| 910 (1)| 00:00:11 |
|* 5 | WINDOW NOSORT | | 136K| 2794K| 910 (1)| 00:00:11 |
|* 6 | INDEX RANGE SCAN | TABLE_REDACTED_INDEX | 136K| 2794K| 910 (1)| 00:00:11 |
| 7 | TABLE ACCESS BY USER ROWID| TABLE_REDACTED | 1 | 274 | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - filter("RN"<=10)
5 - filter(ROW_NUMBER() OVER ( PARTITION BY "CLIENT_ID","SCE_ID" ORDER BY NULL NULL
)<=10)
6 - access("TABLE_REDACTED"."SHORTURL"=:b1)
It seems to be the sort taking all the time. Is occurrenceDate your clustered index, and if not, is it much quicker if you change the ORDER BY to your clustered index? I.e., if the table is clustered by a sequential id, then order by that.
Last time, I simply cached the latest, most interesting rows in a smaller table. With my data distribution it was cheaper to update the cache table on every insert than to query the bulk table.
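To make that concrete, here is a rough sketch of the cache-table idea. It assumes a pk column (as referenced in the queries below) plus the column names used elsewhere in this question; all object names are made up:
create table dailylog_cache as
select pk, contextid, shorturl, occurrencedate
from dailylogdata
where 1 = 0;

create or replace trigger dailylog_cache_trg
after insert on dailylogdata
for each row
begin
  insert into dailylog_cache (pk, contextid, shorturl, occurrencedate)
  values (:new.pk, :new.contextid, :new.shorturl, :new.occurrencedate);
  -- keep only the newest 10 cached rows per contextid
  delete from dailylog_cache
  where contextid = :new.contextid
    and pk not in (select pk
                   from (select pk
                         from dailylog_cache
                         where contextid = :new.contextid
                         order by occurrencedate desc)
                   where rownum <= 10);
end;
/
Regressions then select from the small dailylog_cache table instead of the bulk table.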
I think you should also check other ways/queries to achieve the same result set.
Self-JOIN / GROUP BY
SELECT
    d.pk, d.contextid, d.shorturl, d.occurrencedate -- Oracle has no GROUP BY d.*; list the columns explicitly (pk as in the next query)
  , COUNT(*) AS rn
FROM
    dailylogdata d -- note: Oracle does not accept AS before table aliases
LEFT OUTER JOIN
    dailylogdata d2
    ON d.contextid = d2.contextid
    AND d.occurrencedate >= d2.occurrencedate
    AND d2.shorturl IN (?)
WHERE
    d.shorturl IN (?)
GROUP BY
    d.pk, d.contextid, d.shorturl, d.occurrencedate
HAVING
    COUNT(*) <= 10
And another one which I have no idea if it works correctly:
SELECT
    d.*
FROM
    ( SELECT DISTINCT
          contextid
      FROM
          dailylogdata
      WHERE
          shorturl IN (?)
    ) dd
JOIN
    dailylogdata d
ON d.PK IN
    ( SELECT
          d10.PK
      FROM
          dailylogdata d10
      WHERE
          d10.contextid = dd.contextid
      AND
          d10.shorturl IN (?)
      AND
          rownum <= 10
      -- rownum is applied before any ordering, so these are simply the first
      -- 10 matching rows found, not the 10 earliest by occurrencedate
    )