I have 2 tables.
create table person
(
ID integer,
a_number varchar(9),
first_name varchar(25),
last_name varchar(25),
etc ...
);
create table number_in_ranges_mv
( range_id number(9,0) ,
begin_range number(9,0),
end_range number(9,0)
);
I need to retrieve all the a_numbers that fall within specific ranges.
I have the following query
select nums.range_id, count(p.a_number)
from number_in_ranges nums
left join person p
  on to_number(p.a_number) between nums.begin_range and nums.end_range
group by nums.range_id;
but because the person table has around 100 million records, this query is very slow.
Here is the query plan
Plan hash value: 497207773
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8899 | 234K| | 594K (32)| 00:00:24 |
| 1 | HASH GROUP BY | | 8899 | 234K| | 594K (32)| 00:00:24 |
| 2 | MERGE JOIN OUTER | | 1918M| 48G| | 520K (22)| 00:00:21 |
| 3 | SORT JOIN | | 8899 | 147K| | 28 (4)| 00:00:01 |
| 4 | MAT_VIEW ACCESS FULL | NUMBER_IN_RANGES_MV| 8899 | 147K| | 27 (0)| 00:00:01 |
|* 5 | FILTER | | | | | | |
|* 6 | SORT JOIN | | 86M| 822M| 2642M| 412K (1)| 00:00:17 |
| 7 | INDEX FAST FULL SCAN| PERSON_ANBR_IDX | 86M| 822M| | 67694 (1)| 00:00:03 |
-------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("NUMS"."END_RANGE">=TO_NUMBER("A_NUMBER"(+)))
6 - access("NUMS"."BEGIN_RANGE"<=TO_NUMBER("A_NUMBER"(+)))
filter("NUMS"."BEGIN_RANGE"<=TO_NUMBER("A_NUMBER"(+)))
How can I improve this query?
Thank you!
If each range has a low percentage of related rows in the person table (less than 5%, ideally less than 1%) then a functional index can help the query performance. A straight index on a_number won't help at all.
The most straightforward solution would be to add an index on the conversion expression. For example:
create index ix1 on person (to_number(a_number));
Now, if for every range the percentage of matching rows is higher than 5%, then this index probably won't help. In that case there would still be hope for a merge join, but that's a different story.
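A follow-up step that usually helps with function-based indexes is to re-gather statistics so the optimizer has data for the hidden expression column; a minimal sketch, assuming you are allowed to gather statistics on the table:
BEGIN
  -- Re-gather statistics on PERSON, including the hidden column created by the
  -- function-based index, so the optimizer can cost the range predicate sensibly.
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'PERSON',
    method_opt => 'FOR ALL HIDDEN COLUMNS SIZE AUTO');
END;
/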
You could index the heavily used columns (range_id, a_number, etc.), but alternatively you can select only the a_number column from person in the left join, as below, to improve the existing performance to some extent:
select nums.range_id, count(p.a_number)
from number_in_ranges nums
left join (select distinct a_number from person) p
  on to_number(p.a_number) between nums.begin_range and nums.end_range
group by nums.range_id;
My requirement is to find the idle period for each customer. To find the idle customers, I first have to fetch the registration table, which has 1 million records. To find the last transaction time for each customer, I have to join the transaction log table, which has 60 million records. Below is my query for that.
SELECT CUSTOMERNAME,MOBILENUMBER,ACCOUNTNUMBER,
CUSTOMERID,LASTTXNDATE,
FLOOR(SYSDATE - to_date(TO_CHAR(LASTTXNDATE, 'DD/MM/YYYY'),'DD/MM/YYYY')) AS "IDLE DAYS"
FROM REGN_MAST
LEFT JOIN
( SELECT TXNMOBILENUMBER,MAX(TXNDT) AS LASTTXNDATE
FROM TXN_DETL
GROUP BY TXNMOBILENUMBER
)
ON MOBILENUMBER=TXNMOBILENUMBER;
explain plan for
SELECT CUSTOMERNAME,MOBILENUMBER,ACCOUNTNUMBER,
CUSTOMERID,LASTTXNDATE,
FLOOR(SYSDATE - to_date(TO_CHAR(LASTTXNDATE, 'DD/MM/YYYY'),'DD/MM/YYYY')) AS "IDLE DAYS"
FROM REGN_MAST
LEFT JOIN
( SELECT TXNMOBILENUMBER,MAX(TXNDT) AS LASTTXNDATE
FROM TXN_DETL
GROUP BY TXNMOBILENUMBER
)
ON MOBILENUMBER=TXNMOBILENUMBER;
Plan hash value: 403296370
------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1231K| 102M| | 1554K (1)| 05:10:59 | | |
|* 1 | HASH JOIN RIGHT OUTER | | 1231K| 102M| 58M| 1554K (1)| 05:10:59 | | |
| 2 | VIEW | | 1565K| 40M| | 1535K (1)| 05:07:07 | | |
| 3 | HASH GROUP BY | | 1565K| 37M| 2792M| 1535K (1)| 05:07:07 | | |
| 4 | PARTITION RANGE ALL | | 80M| 1926M| | 1321K (1)| 04:24:24 | 1 |1048575|
| 5 | PARTITION HASH ALL | | 80M| 1926M| | 1321K (1)| 04:24:24 | 1 | 4 |
| 6 | TABLE ACCESS FULL | TXN_DETL | 80M| 1926M| | 1321K (1)| 04:24:24 | 1 |1048575|
| 7 | PARTITION RANGE ALL | | 1231K| 70M| | 12237 (1)| 00:02:27 | 1 |1048575|
| 8 | PARTITION HASH ALL | | 1231K| 70M| | 12237 (1)| 00:02:27 | 1 | 4 |
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID| REGN_MAST | 1231K| 70M| | 12237 (1)| 00:02:27 | 1 |1048575|
| 10 | BITMAP CONVERSION TO ROWIDS | | | | | | | | |
| 11 | BITMAP INDEX FULL SCAN | IDX_REGN_MAST_7 | | | | | | 1 |1048575|
------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MOBILENUMBER"="TXNMOBILENUMBER"(+))
Note
-----
- dynamic sampling used for this statement (level=11)
------------------------------------------------------------------------------------------------------------------------------------------------
This query takes more than 25 minutes. How can I improve its performance?
Any help will be greatly appreciated!
Your query uses all the data from both tables, so the first thing to check is an execution plan that uses FULL TABLE SCANs.
Remember, a FULL TABLE SCAN is slow, but selecting all rows from a table via an INDEX is much slower...
So you should aim for an execution plan like this:
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000K| 60M| | 176K (2)| 00:00:07 |
|* 1 | HASH JOIN OUTER | | 1000K| 60M| 41M| 176K (2)| 00:00:07 |
| 2 | TABLE ACCESS FULL | REGN_MAST | 1000K| 29M| | 1370 (1)| 00:00:01 |
| 3 | VIEW | | 1014K| 30M| | 170K (2)| 00:00:07 |
| 4 | HASH GROUP BY | | 1014K| 16M| 1610M| 170K (2)| 00:00:07 |
| 5 | TABLE ACCESS FULL| TXN_DETL | 60M| 972M| | 49771 (1)| 00:00:02 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MOBILENUMBER"="TXNMOBILENUMBER"(+))
Depending on your hardware and memory configuration the time will vary, but on recent hardware I'd expect an elapsed time below 10 minutes.
You may reduce it further by using
a) a parallel query
b) a materialized view holding the last transaction date (sketched after the sample data below)
Here is my test with generated data, leading to 5+ minutes (see below).
So my advice: either remove all indexes or hint the FULL scans and retry.
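For example, a hinted version might look like this (just a sketch; the FULL hints simply name the tables from your query and force full scans on them):
SELECT /*+ FULL(REGN_MAST) */
       CUSTOMERNAME, MOBILENUMBER, ACCOUNTNUMBER,
       CUSTOMERID, LASTTXNDATE,
       FLOOR(SYSDATE - to_date(TO_CHAR(LASTTXNDATE, 'DD/MM/YYYY'),'DD/MM/YYYY')) AS "IDLE DAYS"
FROM REGN_MAST
LEFT JOIN
     ( SELECT /*+ FULL(TXN_DETL) */ TXNMOBILENUMBER, MAX(TXNDT) AS LASTTXNDATE
       FROM TXN_DETL
       GROUP BY TXNMOBILENUMBER
     )
ON MOBILENUMBER = TXNMOBILENUMBER;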
SQL> set timi on
SQL> set autotrace traceonly
SQL> SELECT CUSTOMERNAME,MOBILENUMBER,ACCOUNTNUMBER,
2 CUSTOMERID,LASTTXNDATE,
3 FLOOR(SYSDATE - to_date(TO_CHAR(LASTTXNDATE, 'DD/MM/YYYY'),'DD/MM/YYYY')
) AS "IDLE DAYS"
4 FROM REGN_MAST
5 LEFT JOIN
6 ( SELECT TXNMOBILENUMBER,MAX(TXNDT) AS LASTTXNDATE
7 FROM TXN_DETL
8 GROUP BY TXNMOBILENUMBER
9 )
10 ON MOBILENUMBER=TXNMOBILENUMBER;
1000000 rows selected.
Elapsed: 00:05:42.23
Sample Data
create table REGN_MAST
as
select
'Name'||rownum CUSTOMERNAME,'00'||rownum MOBILENUMBER, 99*rownum ACCOUNTNUMBER, rownum CUSTOMERID
from dual connect by level <= 1000000;
create table TXN_DETL
as
with cust as (
select
'00'||rownum TXNMOBILENUMBER
from dual connect by level <= 1000000),
trans as (
select DATE'2018-01-01' + rownum TXNDT
from dual connect by level <= 60)
select TXNMOBILENUMBER, TXNDT
from cust CROSS join trans;
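Coming back to option b) above, the materialized view could look roughly like this (the name MV_LAST_TXN and the complete-refresh-on-demand strategy are assumptions; use whatever refresh approach your load process allows):
-- Pre-aggregate the last transaction date per mobile number once,
-- so the nightly query reads ~1M pre-computed rows instead of aggregating 60M.
CREATE MATERIALIZED VIEW MV_LAST_TXN
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT TXNMOBILENUMBER, MAX(TXNDT) AS LASTTXNDATE
FROM TXN_DETL
GROUP BY TXNMOBILENUMBER;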
I would try rewriting the query as:
SELECT m.CUSTOMERNAME, m.MOBILENUMBER, m.ACCOUNTNUMBER,
m.CUSTOMERID, t.TXNDT,
FLOOR(SYSDATE - TRUNC(TXNDT)) AS IDLE_DAYS
FROM REGN_MAST m JOIN
TXN_DETL t
ON m.MOBILENUMBER = t.TXNMOBILENUMBER
WHERE t.TXNDT = (SELECT MAX(t2.TXNDT) FROM TXN_DETL t2 WHERE m.MOBILENUMBER = t2.TXNMOBILENUMBER);
Then, be sure that you have an index on TXN_DETL(TXNMOBILENUMBER, TXNDT) for performance.
I changed the LEFT JOIN to an INNER JOIN under the assumption that all customers have transactions.
This also simplifies the date arithmetic. That has less to do with performance than readability.
Create a covering index on TXN_DETL(TXNMOBILENUMBER,TXNDT).
According to the execution plan 86% of the cost is for the full table scan on TXN_DETL. If there is an index on all the relevant columns Oracle can use that index as a skinny table. An INDEX FAST FULL SCAN operation might run significantly faster than TABLE ACCESS FULL.
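The suggested index might look like this (the index name is made up; since TXN_DETL is partitioned, you may also want to consider making it LOCAL):
-- Holds both columns the aggregate needs, so Oracle can satisfy the GROUP BY
-- with an INDEX FAST FULL SCAN instead of a full scan of the wider table.
CREATE INDEX ix_txn_detl_mob_dt ON TXN_DETL (TXNMOBILENUMBER, TXNDT);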
I have a SELECT statement that runs really slow, it's holding back our night process.
The query is: (Please don't comment about the implicit join syntax, this is automatically generated by Informatica that runs this code) :
SELECT *
FROM STG_DIM_CRM_CASES,V_CRM_CASE_ID_EXISTS_IN_DWH,stg_scd_customers_key
WHERE STG_DIM_CRM_CASES.CRM_CASE_ID = V_CRM_CASE_ID_EXISTS_IN_DWH.CASE_ID(+)
AND STG_DIM_CRM_CASES.account_number = stg_scd_customers_key.account_number(+)
and STG_DIM_CRM_CASES.Case_Create_Date between stg_scd_customers_key.start_date(+) and stg_scd_customers_key.end_date(+)
Edit: the actual query selects only account_number, start_date, end_date and one other column, which is not indexed.
Tables info :
STG_DIM_CRM_CASES
Index - (Account_Number,Case_Create_Date)
size - 270k records.
stg_scd_customers_key
Index - Account_Number,Start_Date,End_Date
Partitioned - End_Date
Size - 500 million records.
V_CRM_CASE_ID_EXISTS_IN_DWH(View) -
select t.case_id
from crm_ps_rc_case t, dim_crm_cases x
where t.case_id=x.crm_case_id;
dim_crm_cases -
Indexed - (crm_case_id)
Size - 100 million .
crm_ps_rc_case -
Size - 270k records
Edit - If it wasn't clear, the view returns 270k records.
The query without the join to stg_scd takes seconds, so that join looks like the part causing the performance issues; the view also runs in seconds, even though it is joined to a 100-million-record table. Right now the query takes somewhere between 12 and 30 minutes, depending on how busy our sources are.
Here is the EXECUTION PLAN :
| 0 | SELECT STATEMENT | | 3278K| 1297M| 559K (4)| 02:10:37 | | | | | |
| 1 | PX COORDINATOR | | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10003 | 3278K| 1297M| 559K (4)| 02:10:37 | | | Q1,03 | P->S | QC (RAND) |
|* 3 | HASH JOIN OUTER | | 3278K| 1297M| 559K (4)| 02:10:37 | | | Q1,03 | PCWP | |
| 4 | PX RECEIVE | | 29188 | 10M| 50662 (5)| 00:11:50 | | | Q1,03 | PCWP | |
| 5 | PX SEND HASH | :TQ10002 | 29188 | 10M| 50662 (5)| 00:11:50 | | | Q1,02 | P->P | HASH |
|* 6 | HASH JOIN RIGHT OUTER | | 29188 | 10M| 50662 (5)| 00:11:50 | | | Q1,02 | PCWP | |
| 7 | BUFFER SORT | | | | | | | | Q1,02 | PCWC | |
| 8 | PX RECEIVE | | 29188 | 370K| 50575 (5)| 00:11:49 | | | Q1,02 | PCWP | |
| 9 | PX SEND BROADCAST | :TQ10000 | 29188 | 370K| 50575 (5)| 00:11:49 | | | | S->P | BROADCAST |
| 10 | VIEW | V_CRM_CASE_ID_EXISTS_IN_DWH | 29188 | 370K| 50575 (5)| 00:11:49 | | | | | |
|* 11 | HASH JOIN | | 29188 | 399K| 50575 (5)| 00:11:49 | | | | | |
| 12 | TABLE ACCESS FULL | CRM_PS_RC_CASE | 29188 | 199K| 570 (1)| 00:00:08 | | | | | |
| 13 | INDEX FAST FULL SCAN| DIM_CRM_CASES$1PK | 103M| 692M| 48894 (3)| 00:11:25 | | | | | |
| 14 | PX BLOCK ITERATOR | | 29188 | 10M| 87 (2)| 00:00:02 | | | Q1,02 | PCWC | |
| 15 | TABLE ACCESS FULL | STG_DIM_CRM_CASES | 29188 | 10M| 87 (2)| 00:00:02 | | | Q1,02 | PCWP | |
| 16 | BUFFER SORT | | | | | | | | Q1,03 | PCWC | |
| 17 | PX RECEIVE | | 515M| 14G| 507K (3)| 01:58:28 | | | Q1,03 | PCWP | |
| 18 | PX SEND HASH | :TQ10001 | 515M| 14G| 507K (3)| 01:58:28 | | | | S->P | HASH |
| 19 | PARTITION RANGE ALL | | 515M| 14G| 507K (3)| 01:58:28 | 1 | 2982 | | | |
| 20 | TABLE ACCESS FULL | STG_SCD_CUSTOMERS_KEY | 515M| 14G| 507K (3)| 01:58:28 | 1 | 2982 | | | |
------------------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("STG_DIM_CRM_CASES"."ACCOUNT_NUMBER"="STG_SCD_CUSTOMERS_KEY"."ACCOUNT_NUMBER"(+))
filter("STG_DIM_CRM_CASES"."CASE_CREATE_DATE">="STG_SCD_CUSTOMERS_KEY"."START_DATE"(+) AND
"STG_DIM_CRM_CASES"."CASE_CREATE_DATE"<="STG_SCD_CUSTOMERS_KEY"."END_DATE"(+))
6 - access("STG_DIM_CRM_CASES"."CRM_CASE_ID"="V_CRM_CASE_ID_EXISTS_IN_DWH"."CASE_ID"(+))
11 - access("T"."CASE_ID"="X"."CRM_CASE_ID")
Notes: Adding indexes may be an issue, depending on the index. This is not the only place these tables are used, so indexes may interfere with other commands (mostly inserts) on these tables.
I've also tried adding a filter on stg_scd excluding all the dates smaller than the minimum date in Table_Cases, but that didn't help because it filtered out only one year of records.
Thanks in advance.
What I believe is happening is that the engine has to resolve the 100m+ records from the view joined to the 500m records BEFORE it applies the limiting criteria (so it effectively builds a cross join, and even if it can use indexes that is a lot of records to generate and then parse). So even though you wrote it as an outer join, the engine isn't able to process it that way (I don't know why).
So at a minimum you are combining 100m rows with 500m rows; that's a lot of data to generate and then parse/limit.
By eliminating the view, the engine may be better able to optimize and use the indexes, thus eliminating the need for that huge intermediate join.
Areas where I would focus my time in troubleshooting:
Eliminate the view, just to remove it as a potential overhead issue.
Recognize that there is no tie between stg_scd_customers_key and V_CRM_CASE_ID_EXISTS_IN_DWH. This means the engine may be doing a cross join BEFORE the results of STG_DIM_CRM_CASES to stg_scd_customers_key have been resolved.
CONSIDER eliminating the view, or using an inline view.
Eliminating the view:
SELECT *
FROM STG_DIM_CRM_CASES
,crm_ps_rc_case t
,dim_crm_cases x
,stg_scd_customers_key
WHERE t.case_id=x.crm_case_id
AND STG_DIM_CRM_CASES.CRM_CASE_ID = t.CASE_ID(+)
AND STG_DIM_CRM_CASES.account_number = stg_scd_customers_key.account_number(+)
AND STG_DIM_CRM_CASES.Case_Create_Date
between stg_scd_customers_key.start_date(+) and stg_scd_customers_key.end_date(+)
using an inline view:
SELECT *
FROM STG_DIM_CRM_CASES,
(select t.case_id
from crm_ps_rc_case t, dim_crm_cases x
where t.case_id=x.crm_case_id) V_CRM_CASE_ID_EXISTS_IN_DWH
,stg_scd_customers_key
WHERE STG_DIM_CRM_CASES.CRM_CASE_ID = V_CRM_CASE_ID_EXISTS_IN_DWH.CASE_ID(+)
AND STG_DIM_CRM_CASES.account_number = stg_scd_customers_key.account_number(+)
AND STG_DIM_CRM_CASES.Case_Create_Date
between stg_scd_customers_key.start_date(+) and stg_scd_customers_key.end_date(+)
As to why:
- http://www.dba-oracle.com/art_hints_views.htm
While the order of the WHERE clause SHOULDN'T matter, consider: on the off chance the engine is executing in the order listed, limiting the 500m-row table down first and then adding the supplemental data from the view would logically be faster.
SELECT *
FROM STG_DIM_CRM_CASES,stg_scd_customers_key,V_CRM_CASE_ID_EXISTS_IN_DWH
WHERE STG_DIM_CRM_CASES.account_number = stg_scd_customers_key.account_number(+)
and STG_DIM_CRM_CASES.Case_Create_Date between stg_scd_customers_key.start_date(+) and stg_scd_customers_key.end_date(+)
and STG_DIM_CRM_CASES.CRM_CASE_ID = V_CRM_CASE_ID_EXISTS_IN_DWH.CASE_ID(+)
The problem is in scanning all partitions:
| 18 | PX SEND HASH | :TQ10001 | 515M| 14G| 507K (3)| 01:58:28 | | | | S->P | HASH |
| 19 | PARTITION RANGE ALL | | 515M| 14G| 507K (3)| 01:58:28 | 1 | 2982 | | | |
| 20 | TABLE ACCESS FULL | STG_SCD_CUSTOMERS_KEY | 515M| 14G| 507K (3)| 01:58:28 | 1 | 2982 | | | |
It happens because you are using a left join to this table. Can you select just one partition using a bind variable? What is the partition key?
I don't see a parallel hint, but according to your plan the query runs in parallel. Is a parallel degree set at the object level? Can you remove the parallelism and post the explain plan without it, please?
I think the problem is the view, which I suspect is being fully executed and returning all its rows before the join conditions are applied.
The overall effect of the view is to add a CASE_ID column that is not null if CRM_CASE_ID is found in the view and null otherwise. I've replaced the view with two direct joins and a CASE expression. By replacing the convenience of the view with logic, you can join directly to each table in it and so avoid one level of join depth.
Try running this version of the query:
SELECT
a.*, b.*, c.*,
CASE WHEN t.case_id is not null and x.crm_case_id is not null then t.case_id END CASE_ID
FROM STG_DIM_CRM_CASES a
LEFT JOIN crm_ps_rc_case t
ON t.case_id = a.CRM_CASE_ID
LEFT JOIN dim_crm_cases x
ON x.crm_case_id = a.CRM_CASE_ID
LEFT JOIN V_CRM_CASE_ID_EXISTS_IN_DWH b
ON a.CRM_CASE_ID = b.CASE_ID
LEFT JOIN stg_scd_customers_key c
ON a.account_number = c.account_number
and a.Case_Create_Date between c.start_date and c.end_date
If you replace a.*, b.*, c.* with only the exact columns you actually need, you'll get a speed up because there's simply less data to return. If you also put indexes on looked-up keys plus all the columns you actually select (a covering index), you will speed it up considerably, because index-only access can be used.
You should verify there are indexes on all joined-to columns as a minimum.
Your problem is that Oracle really only has two ways to get the rows it needs from stg_scd_customers_key. Either (A) it does a single FULL SCAN of that table and then filters out the rows it doesn't want or else (B) it does 270,000 index lookups, at 3 to maybe 5 logical I/Os each (depending on the height of your index), plus another 1 logical I/O to actually read the block from the table.
Given the multiblock read and other optimizations available with a FULL SCAN, and based on your table statistics, Oracle's optimizer is guessing that the FULL SCAN would be faster. And there's a good chance that it's right.
What you need to do is give Oracle a better option.
If you cannot use materialized views where you are, a good "poor man's" materialized view is something called a covering index. Now, that's not reasonable for your query, since you do a SELECT *. But do you really need every column from stg_scd_customers_key?
If you can pare down the list of columns you get from stg_scd_customers_key, you can create an index that (A) starts with account_number, start_date, and end_date and (B) includes all the other columns you need to select.
For example:
SELECT stg_dim_crm_cases.*, V_CRM_CASE_ID_EXISTS_IN_DWH.*, stg_scd_customers_key.column_1, stg_scd_customers_key.column_2
FROM STG_DIM_CRM_CASES,V_CRM_CASE_ID_EXISTS_IN_DWH,stg_scd_customers_key
WHERE STG_DIM_CRM_CASES.CRM_CASE_ID = V_CRM_CASE_ID_EXISTS_IN_DWH.CASE_ID(+)
AND STG_DIM_CRM_CASES.account_number = stg_scd_customers_key.account_number(+)
and STG_DIM_CRM_CASES.Case_Create_Date between stg_scd_customers_key.start_date(+) and stg_scd_customers_key.end_date(+)
If you could make that your query, and create an index on stg_scd_customers_key (account_number, start_date, end_date, column_1, column_2), then you will have given Oracle a better alternative. Now it can read the index alone, instead of the table.
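A sketch of such a covering index (the name is made up, and column_1/column_2 stand for whichever columns you actually need to select):
CREATE INDEX ix_scd_cust_cover
  ON stg_scd_customers_key (account_number, start_date, end_date, column_1, column_2);
-- Since the table is partitioned on end_date, you may prefer a LOCAL index;
-- that is a separate maintenance/sizing decision.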
With tables that big, there are no guarantees until you try it. But covering indexes are often just what the doctor ordered. (All the usual caveats about new indexes apply of course).
some considerations:
1) INDEX
crm_ps_rc_case has no index on case_id; this is a problem, since you are joining 270k <-> 100m rows with a HASH JOIN (not good). A sketch of the index changes from points 1 and 4 follows at the end of this answer.
2) SELECTED COLUMNS
the view V_CRM_CASE_ID_EXISTS_IN_DWH selects t.case_id, but it should select x.crm_case_id instead, at least until you resolve the indexing of t.case_id. Otherwise this spreads HASH JOINs across your whole execution plan (not good).
3) BETWEEN
range joins/filters are always a problem, especially on large tables, but you can restrict the problem by adding extra conditions on the range. Let me explain: try adding these conditions to your WHERE clause:
AND stg_scd_customers_key.end_date = (
SELECT min(r.end_date)
FROM stg_scd_customers_key r
WHERE r.end_date >= STG_DIM_CRM_CASES.Case_Create_Date
)
AND stg_scd_customers_key.start_date = (
SELECT max(r.start_date)
FROM stg_scd_customers_key r
WHERE r.start_date <= STG_DIM_CRM_CASES.Case_Create_Date
)
Yes, it will evaluate 270k * 2 subqueries, but the final join will work on far fewer records, limiting I/O operations (it should be better).
4) INDEX COLUMN ORDER
there are conflicting reports on whether it matters or not, but in my experience it does.
It may be only a minor improvement, but you can try modifying the index on stg_scd_customers_key to invert the order of Start_Date and End_Date; in my experience, for range filtering it is more efficient to have the upper bound before the lower bound in the index (see the sketch below).
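Here is a sketch of the index changes from points 1) and 4); the index names are illustrative only, and you should check what already exists before adding anything:
-- 1) Give crm_ps_rc_case an index on its join key.
CREATE INDEX ix_crm_ps_rc_case_id ON crm_ps_rc_case (case_id);

-- 4) Range-filtering index with the upper bound (end_date) ahead of the lower bound.
CREATE INDEX ix_scd_cust_range
  ON stg_scd_customers_key (account_number, end_date, start_date);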
I need advice on the attached query. The query executes for over an hour and does full table scans according to the explain plan. I am fairly new to query tuning and would appreciate some advice.
Firstly, why would I get a full table scan even though all the columns I use have indexes created on them?
Secondly, is there any possibility of reducing the execution time? All the tables accessed are huge and contain millions of records, but even so I would like to scope out some options. Appreciate your help.
Query:
select
distinct rtrim(a.cod_acct_no)||'|'||
a.cod_prod||'|'||
to_char(a.dat_acct_open,'Mon DD YYYY HH:MMAM')||'|'||
a.cod_acct_title||'|'||
a.cod_acct_stat||'|'||
ltrim(to_char(a.amt_od_limit,'99999999999999999990.999999'))||'|'||
ltrim(to_char(a.bal_book,'99999999999999999990.999999'))||'|'||
a.flg_idd_auth||'|'||
a.flg_mnt_status||'|'||
rtrim(c.cod_acct_no)||'|'||
c.cod_10||'|'||
d.nam_branch||'|'||
d.nam_cc_city||'|'||
d.nam_cc_state||'|'||
c.cod_1||'|'||
c.cod_14||'|'||
num_14||'|'||
a.cod_cust||'|'||
c.cod_last_mnt_chkrid||'|'||
c.dat_last_mnt||'|'||
c.ctr_updat_srlno||'|'||
c.cod_20||'|'||
c.num_16||'|'||
c.cod_14||'|'||
c.num_10 ||'|'||
a.flg_classif_reqd||'|'||
(select g.cod_classif_plan_id||'|'||
g.cod_classif_plan_id
from
ac_acct_preferences g
where
a.cod_acct_no=g.cod_acct_no AND g.FLG_MNT_STATUS = 'A' )||'|'||
(select e.dat_cam_expiry from flexprod_host.AC_ACCT_PLAN_CRITERIA e where a.cod_acct_no=e.cod_acct_no and e.FLG_MNT_STATUS ='A')||'|'||
c.cod_23||'|'||
lpad(trim(a.cod_cc_brn),4,0)||'|'||
(select min( o.dat_eff) from ch_acct_od_hist o where a.cod_acct_no=o.cod_acct_no )
from
ch_acct_mast a,
ch_acct_cbr_codes c,
ba_cc_brn_mast d
where
a.flg_mnt_status ='A'
and c.flg_mnt_status ='A'
and a.cod_acct_no= c.cod_acct_no(+)
and a.cod_cc_brn=d.cod_cc_brn
and a.cod_prod in (
299,200,804,863,202,256,814,232,182,844,279,830,802,833,864,
813,862,178,205,801,235,897,231,187,229,847,164,868,805,207,
250,837,274,253,831,893,201,809,846,819,820,845,811,843,285,
894,284,817,832,278,818,810,181,826,867,825,848,871,866,895,
770,806,827,835,838,881,853,188,816,293,298)
Query Plan:
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 4253465430
------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 733K| 125M| | 468K (1)|999:59:59 | | |
| 1 | TABLE ACCESS BY INDEX ROWID | AC_ACCT_PREFERENCES | 1 | 26 | | 3 (0)| 00:01:05 | | |
|* 2 | INDEX UNIQUE SCAN | IN_AC_ACCT_PREFERENCES_1 | 1 | | | 2 (0)| 00:00:43 | | |
| 3 | PARTITION HASH SINGLE | | 1 | 31 | | 3 (0)| 00:01:05 | KEY | KEY |
| 4 | TABLE ACCESS BY LOCAL INDEX ROWID| AC_ACCT_PLAN_CRITERIA | 1 | 31 | | 3 (0)| 00:01:05 | KEY | KEY |
|* 5 | INDEX UNIQUE SCAN | IN_AC_ACCT_PLAN_CRITERIA_1 | 1 | | | 2 (0)| 00:00:43 | KEY | KEY |
| 6 | SORT AGGREGATE | | 1 | 29 | | | | | |
| 7 | FIRST ROW | | 1 | 29 | | 3 (0)| 00:01:05 | | |
|* 8 | INDEX RANGE SCAN (MIN/MAX) | IN_CH_ACCT_OD_HIST_1 | 1 | 29 | | 3 (0)| 00:01:05 | | |
| 9 | HASH UNIQUE | | 733K| 125M| 139M| 468K (1)|999:59:59 | | |
|* 10 | HASH JOIN | | 733K| 125M| | 439K (1)|999:59:59 | | |
|* 11 | TABLE ACCESS FULL | BA_CC_BRN_MAST | 3259 | 136K| | 31 (0)| 00:11:04 | | |
|* 12 | HASH JOIN | | 747K| 97M| 61M| 439K (1)|999:59:59 | | |
| 13 | PARTITION HASH ALL | | 740K| 52M| | 286K (1)|999:59:59 | 1 | 64 |
|* 14 | TABLE ACCESS FULL | CH_ACCT_MAST | 740K| 52M| | 286K (1)|999:59:59 | 1 | 64 |
|* 15 | TABLE ACCESS FULL | CH_ACCT_CBR_CODES | 9154K| 541M| | 117K (1)|699:41:01 | | |
------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("COD_ACCT_NO"=:B1 AND "FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_co
de'),'0')))
5 - access("COD_ACCT_NO"=:B1 AND "FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_co
de'),'0')))
8 - access("COD_ACCT_NO"=:B1)
filter("COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_code'),'0')))
10 - access("COD_CC_BRN"="COD_CC_BRN")
11 - filter("COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_code'),'0')))
12 - access("COD_ACCT_NO"="COD_ACCT_NO")
14 - filter(("COD_PROD"=164 OR "COD_PROD"=178 OR "COD_PROD"=181 OR "COD_PROD"=182 OR "COD_PROD"=187 OR "COD_PROD"=188 OR
"COD_PROD"=200 OR "COD_PROD"=201 OR "COD_PROD"=202 OR "COD_PROD"=205 OR "COD_PROD"=207 OR "COD_PROD"=229 OR "COD_PROD"=231 OR
"COD_PROD"=232 OR "COD_PROD"=235 OR "COD_PROD"=250 OR "COD_PROD"=253 OR "COD_PROD"=256 OR "COD_PROD"=274 OR "COD_PROD"=278 OR
"COD_PROD"=279 OR "COD_PROD"=284 OR "COD_PROD"=285 OR "COD_PROD"=293 OR "COD_PROD"=298 OR "COD_PROD"=299 OR "COD_PROD"=770 OR
"COD_PROD"=801 OR "COD_PROD"=802 OR "COD_PROD"=804 OR "COD_PROD"=805 OR "COD_PROD"=806 OR "COD_PROD"=809 OR "COD_PROD"=810 OR
"COD_PROD"=811 OR "COD_PROD"=813 OR "COD_PROD"=814 OR "COD_PROD"=816 OR "COD_PROD"=817 OR "COD_PROD"=818 OR "COD_PROD"=819 OR
"COD_PROD"=820 OR "COD_PROD"=825 OR "COD_PROD"=826 OR "COD_PROD"=827 OR "COD_PROD"=830 OR "COD_PROD"=831 OR "COD_PROD"=832 OR
"COD_PROD"=833 OR "COD_PROD"=835 OR "COD_PROD"=837 OR "COD_PROD"=838 OR "COD_PROD"=843 OR "COD_PROD"=844 OR "COD_PROD"=845 OR
"COD_PROD"=846 OR "COD_PROD"=847 OR "COD_PROD"=848 OR "COD_PROD"=853 OR "COD_PROD"=862 OR "COD_PROD"=863 OR "COD_PROD"=864 OR
"COD_PROD"=866 OR "COD_PROD"=867 OR "COD_PROD"=868 OR "COD_PROD"=871 OR "COD_PROD"=881 OR "COD_PROD"=893 OR "COD_PROD"=894 OR
"COD_PROD"=895 OR "COD_PROD"=897) AND "FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_
code'),'0')))
15 - filter("FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_code'),'0')))
Considering each table contains over 100 columns, I am limited in uploading the entire table definitions. However, please find below the details for the columns accessed in the WHERE clause. Hope this helps.
Columns Type Nullable
cod_acct_no CHAR(16) N
FLG_MNT_STATUS CHAR(1) N
cod_23 VARCHAR2(360) Y
cod_cc_brn NUMBER(5) N
cod_prod NUMBER N
I hope this can bring the cost down.
select
distinct rtrim(a.cod_acct_no)||'|'||
a.cod_prod||'|'||
to_char(a.dat_acct_open,'Mon DD YYYY HH:MMAM')||'|'||
a.cod_acct_title||'|'||
a.cod_acct_stat||'|'||
ltrim(to_char(a.amt_od_limit,'99999999999999999990.999999'))||'|'||
ltrim(to_char(a.bal_book,'99999999999999999990.999999'))||'|'||
a.flg_idd_auth||'|'||
a.flg_mnt_status||'|'||
rtrim(c.cod_acct_no)||'|'||
c.cod_10||'|'||
d.nam_branch||'|'||
d.nam_cc_city||'|'||
d.nam_cc_state||'|'||
c.cod_1||'|'||
c.cod_14||'|'||
num_14||'|'||
a.cod_cust||'|'||
c.cod_last_mnt_chkrid||'|'||
c.dat_last_mnt||'|'||
c.ctr_updat_srlno||'|'||
c.cod_20||'|'||
c.num_16||'|'||
c.cod_14||'|'||
c.num_10 ||'|'||
a.flg_classif_reqd||'|'||
g.cod_classif_plan_id||'|'||g.cod_classif_plan_id
||'|'||
e.dat_cam_expiry ||'|'||
c.cod_23||'|'||
lpad(trim(a.cod_cc_brn),4,0)||'|'||
(select min( o.dat_eff) from ch_acct_od_hist o where a.cod_acct_no=o.cod_acct_no )
from
ch_acct_mast a
JOIN ch_acct_cbr_codes c
ON a.flg_mnt_status ='A'
and c.flg_mnt_status ='A'
and a.cod_acct_no = c.cod_acct_no
JOIN ba_cc_brn_mast d
ON a.cod_cc_brn = d.cod_cc_brn
JOIN ac_acct_preferences g
ON a.cod_acct_no=g.cod_acct_no AND g.FLG_MNT_STATUS = 'A'
INNER JOIN flexprod_host.AC_ACCT_PLAN_CRITERIA e
ON a.cod_acct_no=e.cod_acct_no and e.FLG_MNT_STATUS ='A'
WHERE a.cod_prod in (
299,200,804,863,202,256,814,232,182,844,279,830,802,833,864,
813,862,178,205,801,235,897,231,187,229,847,164,868,805,207,
250,837,274,253,831,893,201,809,846,819,820,845,811,843,285,
894,284,817,832,278,818,810,181,826,867,825,848,871,866,895,
770,806,827,835,838,881,853,188,816,293,298)
1. Don't fear full table scans. If a large percent of the rows in a table are being accessed it is more efficient to use a hash join/full table scan than a nested loop/index scan.
2. Fix statistics and re-analyze objects. 999 hours to read a table? That's probably an optimizer bug; have a DBA look at select * from sys.aux_stats$; for ridiculous values.
The time isn't very useful, but if one of your forecasted values is so significantly off then you need to check all of them. You should probably re-gather stats on all the relevant tables. Use default settings unless there is a good reason. For example, exec dbms_stats.gather_table_stats('your_schema_name','CH_ACCT_MAST');.
3. Look at cardinalities. Are the Rows estimates in the ballpark? They'll almost never be perfect, but if they are off by more than
an order of magnitude or two it can cause problems. Look for the first significant difference and try to correct it.
4. Code change. #Santhosh had a good idea to re-write using ANSI joins and manually unnest a subquery. Although I think you should
try to unnest the other subquery instead. Oracle can automatically unnest subqueries, but not if subqueries "contain aggregate functions".
5. Disable VPD. It looks like this query is being transformed. Make sure you understand exactly what it's doing and why. You may want to disable VPD temporarily, for yourself, while you debug this problem.
6. Parallelism. Since some of these tables are large, you may want to add a parallel hint. But be careful, it is easy to use up a lot
of resources. Try to get the plan right before you do this.
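As a rough illustration of point 6, the hints go in the outer SELECT. The cut-down query below only shows hint placement (the full column list and IN-list are omitted), and the degree of 8 is an assumption to size against your own system:
-- Trimmed-down version of the original query, kept only to show where the hints go.
select /*+ parallel(a 8) parallel(c 8) */
       distinct a.cod_acct_no, a.cod_prod, c.cod_10, d.nam_branch
from   ch_acct_mast a,
       ch_acct_cbr_codes c,
       ba_cc_brn_mast d
where  a.flg_mnt_status = 'A'
and    c.flg_mnt_status = 'A'
and    a.cod_acct_no = c.cod_acct_no(+)
and    a.cod_cc_brn = d.cod_cc_brn;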
When we join more than 2 tables, Oracle (or, for that matter, any database) decides to join 2 tables and uses the result to join with the subsequent tables. Is there a way to identify the intermediate join size? I am particularly interested in Oracle. One solution I know is to use Autotrace in SQL Developer, which has the column LAST_OUTPUT_ROWS. But for queries executed by PL/SQL and other means, does Oracle record the intermediate join size in some table?
I am asking this because recently we had a problem: someone dropped the statistics and failed to regenerate them, and when we traced through we found that Oracle formed an intermediate table of 180 million rows before arriving at the final result of 6 rows, and the query was quite slow.
Oracle can materialize the intermediate results of a table join in the temporary segment set for your session.
Since it's a one-off table that is deleted after the query is complete, its statistics are not stored.
However, you can estimate its size by building a plan for the query and looking at ROWS parameters of the appropriate operation:
EXPLAIN PLAN FOR
WITH q AS
(
SELECT /*+ MATERIALIZE */
e1.value AS val1, e2.value AS val2
FROM t_even e1, t_even e2
)
SELECT COUNT(*)
FROM q;
SELECT *
FROM TABLE(DBMS_XPLAN.display());
Plan hash value: 3705384459
---------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 43G (5)|999:59:59 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | | | | | |
| 3 | MERGE JOIN CARTESIAN | | 100T| 909T| 42G (3)|999:59:59 |
| 4 | TABLE ACCESS FULL | T_ODD | 10M| 47M| 4206 (3)| 00:00:51 |
| 5 | BUFFER SORT | | 10M| 47M| 42G (3)|999:59:59 |
| 6 | TABLE ACCESS FULL | T_ODD | 10M| 47M| 4204 (3)| 00:00:51 |
| 7 | SORT AGGREGATE | | 1 | | | |
| 8 | VIEW | | 100T| | 1729M (62)|999:59:59 |
| 9 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6604_2660595 | 100T| 909T| 1729M (62)|999:59:59 |
---------------------------------------------------------------------------------------------------------
Here, the materialized table is called SYS_TEMP_0FD9D6604_2660595 and the estimated record count is 100T (100,000,000,000,000 records).
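If you need actual rather than estimated intermediate row counts, a common approach (assuming you can re-run the statement) is to collect rowsource statistics and read them back with DBMS_XPLAN.DISPLAY_CURSOR; the A-Rows column then shows how many rows each step, including the materialized join, really produced:
-- Re-run the statement with rowsource statistics enabled ...
SELECT /*+ gather_plan_statistics */ COUNT(*)
FROM t_even e1, t_even e2;

-- ... then, in the same session, display the actual row counts (the A-Rows column).
SELECT *
FROM TABLE(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));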