I need to optimise this query by creating an object, but I don't know how to do it, and I don't understand why using an object can optimise this query in this case.
I have a WINE table (I cannot change the data types in this case):
CREATE TABLE wine (
vintage NUMBER(4) NOT NULL,
wine_no SMALLINT NOT NULL,
vid CHAR(08) NOT NULL,
cid CHAR(06) NOT NULL,
pctalc NUMBER(4, 2),
price NUMBER(6, 2),
grade CHAR(01) NOT NULL,
wname CHAR(40) NOT NULL,
comments CHAR(200) NOT NULL
);
I tried to create an object by following this link: https://docs.oracle.com/cd/B19306_01/appdev.102/b14261/objects.htm
but I don't know if this is the right track or how to implement it.
This is the query I need to optimise:
SELECT w.wname,
SUM(w.price) sold_total
FROM wine w
GROUP BY w.wname;
This is my explain plan; I would like the query to run faster:
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 4045097665
-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 45 | 32 (4)| 00:00:01 |
| 1 | HASH GROUP BY | | 1 | 45 | 32 (4)| 00:00:01 |
| 2 | TABLE ACCESS FULL| WINE | 1500 | 67500 | 31 (0)| 00:00:01 |
-------------------------------------------------------------------------------
9 rows selected.
Any thoughts?
Is there any way to optimise this query (without changing the data types)?
Could someone help me and teach me? Thanks a lot!
This is the query I need to optimise:
SELECT w.wname,
SUM(w.price) sold_total
FROM wine w
GROUP BY w.wname;
How do you expect Oracle to tell you the total price of every single distinct value for WNAME without reading every row in the table and adding everything up?
Answer: it can't. It's a great database, but it's not magic.
Now, what you can do is give Oracle something else to read instead to get the answer... something smaller than the whole table.
Option 1 - Covering Index
The easy way to do this is to make a so-called "covering" index on the table. A "covering" index is one that contains all of the columns that you use in your query, so that Oracle can use the index instead of the table. E.g.,
CREATE INDEX wine_sum_n1 ON wine (wname, price);
However, in your case, your table rows are not very wide. So, a covering index won't be that much smaller than the actual table. It would help though and it is a very easy approach.
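Note that Oracle can only answer the query from the index alone when every row is guaranteed to appear in it; that holds here because WNAME is declared NOT NULL. A minimal sketch to verify the index is being picked up after you create it:
EXPLAIN PLAN FOR
SELECT w.wname, SUM(w.price) sold_total
FROM wine w
GROUP BY w.wname;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- expect an INDEX FAST FULL SCAN on WINE_SUM_N1 in place of the TABLE ACCESS FULL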
Option 2 - Materialized View with ON QUERY COMPUTATION
Another way to give Oracle a smaller thing to read is to pre-compute all the sums in a materialized view. This is always problematic, because any DML changes to your table will cause the materialized view to become stale and you'll lose the performance benefits unless and until something refreshes it.
(Oracle has an ON COMMIT REFRESH option that avoids this problem, but that has several dangers and limitations. I avoid it for having been burned in the past, but it's still worth reading up on).
Oracle 12.2 introduced a really cool option for materialized views called ON QUERY COMPUTATION. This feature allows Oracle to still use materialized views, even if they are stale, by joining in data from the materialized view log. It could be a good option for you, so I'll give a full example, below.
-- Setup
DROP TABLE wine;
DROP MATERIALIZED VIEW wine_name_sum_mv;
CREATE TABLE wine (
vintage NUMBER(4) NOT NULL,
wine_no SMALLINT NOT NULL,
vid CHAR(08) NOT NULL,
cid CHAR(06) NOT NULL,
pctalc NUMBER(4, 2),
price NUMBER(6, 2),
grade CHAR(01) NOT NULL,
wname CHAR(40) NOT NULL,
comments CHAR(200) NOT NULL
);
INSERT INTO wine
SELECT mod(rownum,10000) vintage,
rownum wine_No,
'xxxxxxxx' vid,
'yyyyyy' cid,
0 pctalc,
50 price,
'z' grade,
'WINE #' || mod(rownum,100) wname,
'made up data for wine' comments
FROM DUAL
CONNECT BY ROWNUM <= 100000;
COMMIT;
CREATE MATERIALIZED VIEW LOG ON wine
WITH ROWID
(wname, price)
INCLUDING NEW VALUES;
CREATE MATERIALIZED VIEW wine_name_sum_mv
REFRESH FAST ON DEMAND
ENABLE QUERY REWRITE
ENABLE ON QUERY COMPUTATION
AS
SELECT w.wname,
sum(w.price) sold_total
FROM wine w
GROUP BY w.wname;
-- Verify the materialized view is being used
EXPLAIN PLAN
SET STATEMENT_ID = 'MMCP001' FOR
SELECT w.wname,
SUM(w.price) sold_total
FROM wine w
GROUP BY w.wname;
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 4400 | 3 (0)| 00:00:01 |
| 1 | MAT_VIEW REWRITE ACCESS FULL| WINE_NAME_SUM_MV | 100 | 4400 | 3 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------
-- Run the INSERT again to change the underlying table
INSERT INTO wine
SELECT mod(rownum,10000) vintage,
rownum wine_No,
'xxxxxxxx' vid,
'yyyyyy' cid,
0 pctalc,
50 price,
'z' grade,
'WINE #' || mod(rownum,100) wname,
'made up data for wine' comments
FROM DUAL
CONNECT BY ROWNUM <= 100000;
-- Verify whether the materialized view is still being used
EXPLAIN PLAN
SET STATEMENT_ID = 'MMCP001' FOR
SELECT w.wname,
SUM(w.price) sold_total
FROM wine w
GROUP BY w.wname;
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 210 | 11550 | 30 (14)| 00:00:01 |
| 1 | VIEW | | 210 | 11550 | 30 (14)| 00:00:01 |
| 2 | UNION-ALL | | | | | |
|* 3 | VIEW | VW_FOJ_0 | 100 | 5800 | 10 (10)| 00:00:01 |
|* 4 | HASH JOIN FULL OUTER | | 100 | 2500 | 10 (10)| 00:00:01 |
| 5 | VIEW | | 10 | 80 | 7 (15)| 00:00:01 |
| 6 | HASH GROUP BY | | 10 | 640 | 7 (15)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | MLOG$_WINE | 1000 | 64000 | 6 (0)| 00:00:01 |
| 8 | VIEW | | 100 | 1700 | 3 (0)| 00:00:01 |
| 9 | MAT_VIEW ACCESS FULL | WINE_NAME_SUM_MV | 100 | 4400 | 3 (0)| 00:00:01 |
|* 10 | VIEW | VW_FOJ_1 | 100 | 7100 | 10 (10)| 00:00:01 |
|* 11 | HASH JOIN FULL OUTER | | 100 | 3700 | 10 (10)| 00:00:01 |
| 12 | VIEW | | 10 | 300 | 7 (15)| 00:00:01 |
| 13 | HASH GROUP BY | | 10 | 640 | 7 (15)| 00:00:01 |
|* 14 | TABLE ACCESS FULL | MLOG$_WINE | 1000 | 64000 | 6 (0)| 00:00:01 |
| 15 | VIEW | | 100 | 700 | 3 (0)| 00:00:01 |
| 16 | MAT_VIEW ACCESS FULL | WINE_NAME_SUM_MV | 100 | 4400 | 3 (0)| 00:00:01 |
| 17 | MERGE JOIN | | 10 | 1150 | 10 (20)| 00:00:01 |
| 18 | MAT_VIEW ACCESS BY INDEX ROWID| WINE_NAME_SUM_MV | 100 | 4400 | 2 (0)| 00:00:01 |
| 19 | INDEX FULL SCAN | I_SNAP$_WINE_NAME_SUM_MV | 100 | | 1 (0)| 00:00:01 |
|* 20 | SORT JOIN | | 10 | 710 | 8 (25)| 00:00:01 |
| 21 | VIEW | | 10 | 710 | 7 (15)| 00:00:01 |
| 22 | HASH GROUP BY | | 10 | 640 | 7 (15)| 00:00:01 |
|* 23 | TABLE ACCESS FULL | MLOG$_WINE | 1000 | 64000 | 6 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter("AV$0"."OJ_MARK" IS NULL)
4 - access(SYS_OP_MAP_NONNULL("SNA$0"."WNAME")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
7 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2019-09-19 15:02:46', 'syyyy-mm-dd hh24:mi:ss'))
10 - filter("SNA$0"."SNA_OJ_MARK" IS NULL)
11 - access(SYS_OP_MAP_NONNULL("SNA$0"."WNAME")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
14 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2019-09-19 15:02:46', 'syyyy-mm-dd hh24:mi:ss'))
20 - access(SYS_OP_MAP_NONNULL("WNAME")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
filter(SYS_OP_MAP_NONNULL("WNAME")=SYS_OP_MAP_NONNULL("AV$0"."GB0"))
23 - filter("MAS$"."SNAPTIME$$">TO_DATE(' 2019-09-19 15:02:46', 'syyyy-mm-dd hh24:mi:ss'))
What this is showing is that Oracle still benefits a lot from the materialized view. ON QUERY COMPUTATION seems like a really cool feature that gets us around many of the historical drawbacks of materialized views. DISCLOSURE: I have not used it yet in Production code. There may be pitfalls!
Also, you still want to refresh your materialized views periodically. The more data there is in the materialized view logs, the less ON QUERY COMPUTATION will help you.
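For example, a fast refresh applies just the changes recorded in the materialized view log — a minimal sketch, using the view from this example:
BEGIN
  DBMS_MVIEW.REFRESH('WINE_NAME_SUM_MV', method => 'F');  -- 'F' = fast refresh from the MV log
END;
/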
Creating a PL/SQL Object type won't do anything to make your query faster.
Here's the plan for your query on a 19c database, no data, no stats, no indexes.
SQL_ID 703yw7hub9rq2, child number 0
-------------------------------------
SELECT w.wname, SUM(w.price) sold_total FROM wine w GROUP BY
w.wname
Plan hash value: 385313506
--------------------------------------------
| Id | Operation | Name | E-Rows |
--------------------------------------------
| 0 | SELECT STATEMENT | | |
| 1 | HASH GROUP BY | | 1 |
| 2 | TABLE ACCESS FULL| WINE | 1 |
--------------------------------------------
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
- Warning: basic plan statistics not available. These are only collected when:
* hint 'gather_plan_statistics' is used for the statement or
* parameter 'statistics_level' is set to 'ALL', at session or system level
For better help on your question, describe your performance problem. Show us the Execution Plan of your problematic SQL. Tell us about your STATS and any indexes you have.
General design feedback: I think what you want for your text columns, such as COMMENTS, is a VARCHAR2 - not a CHAR.
CHAR(8) will always take up 8 bytes (single byte data), even for strings of length 1, 2, 3..7. VARCHAR2() only stores the data as entered.
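A quick illustration of the difference:
-- CHAR pads to the declared length; VARCHAR2 stores only what was entered
SELECT LENGTH(CAST('abc' AS CHAR(8)))     AS char_len,      -- 8
       LENGTH(CAST('abc' AS VARCHAR2(8))) AS varchar2_len   -- 3
FROM dual;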
I am having a hard time understanding why the Oracle CBO is behaving the way it does when a bind variable is part of a OR condition.
My environment
Oracle 12.2 over Red Hat Linux 7
Note: I am just providing a simplification of the query where the problem is located.
$ sqlplus / as sysdba
SQL*Plus: Release 12.2.0.1.0 Production on Thu Jun 10 15:40:07 2021
Copyright (c) 1982, 2016, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL> #test.sql
SQL> var loanIds varchar2(4000);
SQL> exec :loanIds := '100000018330,100000031448,100000013477,100000023115,100000022550,100000183669,100000247514,100000048198,100000268289';
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.00
SQL> SELECT
2 whs.* ,
3 count(*) over () AS TOTAL
4 FROM ALFAMVS.WHS_LOANS whs
5 WHERE
6 ( nvl(:loanIds,'XX') = 'XX' or
7 loanid IN (select regexp_substr(NVL(:loanIds,''),'[^,]+', 1, level) from dual
8 connect by level <= regexp_count(:loanIds,'[^,]+'))
9 )
10 ;
7 rows selected.
Elapsed: 00:00:18.72
Execution Plan
----------------------------------------------------------
Plan hash value: 2980809427
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 6729 | 6748K| 2621 (1)| 00:00:01 |
| 1 | WINDOW BUFFER | | 6729 | 6748K| 2621 (1)| 00:00:01 |
|* 2 | FILTER | | | | | |
| 3 | TABLE ACCESS FULL | WHS_LOANS | 113K| 110M| 2621 (1)| 00:00:01 |
|* 4 | FILTER | | | | | |
|* 5 | CONNECT BY WITHOUT FILTERING (UNIQUE)| | | | | |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(NVL(:LOANIDS,'XX')='XX' OR EXISTS (SELECT 0 FROM "DUAL" "DUAL" WHERE
SYS_OP_C2C( REGEXP_SUBSTR (NVL(:LOANIDS,''),'[^,]+',1,LEVEL))=:B1 CONNECT BY LEVEL<=
REGEXP_COUNT (:LOANIDS,'[^,]+')))
4 - filter(SYS_OP_C2C( REGEXP_SUBSTR (NVL(:LOANIDS,''),'[^,]+',1,LEVEL))=:B1)
5 - filter(LEVEL<= REGEXP_COUNT (:LOANIDS,'[^,]+'))
Statistics
----------------------------------------------------------
288 recursive calls
630 db block gets
9913 consistent gets
1 physical reads
118724 redo size
13564 bytes sent via SQL*Net to client
608 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
113003 sorts (memory)
0 sorts (disk)
7 rows processed
SQL> set autotrace off
SQL> select count(*) from ALFAMVS.WHS_LOANS ;
COUNT(*)
----------
113095
1 row selected.
Elapsed: 00:00:00.14
KEY POINTS
I do know that if I rewrite the OR expression as two selects combined with UNION ALL, it works perfectly. The problem is that I have a lot of conditions written this way, so UNION ALL is not a solution in my case.
The table has statistics up to date calculated with FOR ALL COLUMNS SIZE AUTO and with ESTIMATE PERCENT 10%.
Dynamic SQL is not a solution in my case, because the query is called through a third party software that uses an API Web to convert the result to JSON.
I was able to rephrase the regular expression with connect by level in a way that now takes 19 seconds. Before it was taking 40 seconds.
The table has only 113K records and no indexes.
The query has 20 conditions of this kind, all written in the same way, as the screen in the web app that triggers the query by the API allows the user to use any combination of parameters or none at all.
If I remove the expression NVL(:loanIds,'XX') = 'XX' OR, the query takes 0.01 seconds. Why is this OR expression with binds giving the optimizer such a headache?
-- UPDATE --
I want to thank @Alex Poole for his suggestions, and share that the third alternative (removing the regular expressions) has worked like a charm. It would be great to understand why, though. You have my most sincere gratitude. I had used those regular-expression constructs for a while and never hit this problem. Also, the suggestion to use regexp_like was better than the original regexp_substr with connect by level, but still far slower than the version that uses no regular expressions at all.
Original query
7 rows selected.
Elapsed: 00:00:36.29
New query
7 rows selected.
Elapsed: 00:00:00.58
Once the EXISTS disappeared from the internal predicate, the query runs as fast as hell.
Thank you all for your comments !
From the execution plan the optimiser is, for some reason, re-evaluating the hierarchical query for every row in your table, and then using exists() to see if that row's ID is in the result. It isn't clear why the OR is causing that. It might be something to raise with Oracle.
From experimenting I can see three ways to at least partially work around the problem - though I'm sure there are others. The first is to move the CSV expansion to a CTE and then force that to materialize with a hint:
WITH loanIds_cte (loanId) as (
select /*+ materialize */ regexp_substr(:loanIds,'[^,]+', 1, level)
from dual
connect by level <= regexp_count(:loanIds,'[^,]+')
)
SELECT
whs.* ,
count(*) over () AS TOTAL
FROM WHS_LOANS whs
WHERE
( :loanIds is null or
loanid IN (select loanId from loanIds_cte)
)
;
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 3226738189
--------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1102 | 9918 | 11 (0)| 00:00:01 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9FD2A6_198A2E1A | | | | |
|* 3 | CONNECT BY WITHOUT FILTERING| | | | | |
| 4 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
| 5 | WINDOW BUFFER | | 1102 | 9918 | 9 (0)| 00:00:01 |
|* 6 | FILTER | | | | | |
| 7 | TABLE ACCESS FULL | WHS_LOANS | 11300 | 99K| 9 (0)| 00:00:01 |
|* 8 | VIEW | | 1 | 2002 | 2 (0)| 00:00:01 |
| 9 | TABLE ACCESS FULL | SYS_TEMP_0FD9FD2A6_198A2E1A | 1 | 2002 | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(LEVEL<= REGEXP_COUNT (:LOANIDS,'[^,]+'))
6 - filter(:LOANIDS IS NULL OR EXISTS (SELECT 0 FROM (SELECT /*+ CACHE_TEMP_TABLE ("T1") */ "C0"
"LOANID" FROM "SYS"."SYS_TEMP_0FD9FD2A6_198A2E1A" "T1") "LOANIDS_CTE" WHERE SYS_OP_C2C("LOANID")=:B1))
8 - filter(SYS_OP_C2C("LOANID")=:B1)
That still does the odd transformation to exists(), but at least now it is querying the materialized CTE, so the connect by query is only evaluated once.
Or you could compare each loanId value with the full string using a regular expression:
SELECT
whs.* ,
count(*) over () AS TOTAL
FROM WHS_LOANS whs
WHERE
( :loanIds is null or
regexp_like(:loanIds, '(^|,)' || loanId || '(,|$)')
)
;
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 1622376598
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1102 | 9918 | 9 (0)| 00:00:01 |
| 1 | WINDOW BUFFER | | 1102 | 9918 | 9 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| WHS_LOANS | 1102 | 9918 | 9 (0)| 00:00:01 |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(:LOANIDS IS NULL OR REGEXP_LIKE
(:LOANIDS,SYS_OP_C2C(U'(^|,)'||"LOANID"||U'(,|$)')))
which is slower than the CTE in my testing, because regular expressions are still expensive and you're doing 113k of them (still, better than 2 x 113k x number-of-elements of them).
Or you can avoid regular expressions and use several separate comparisons:
SELECT
whs.* ,
count(*) over () AS TOTAL
FROM WHS_LOANS whs
WHERE
( :loanIds is null or
:loanIds like loanId || ',%' or
:loanIds like '%,' || loanId or
:loanIds like '%,' || loanId || ',%'
)
;
PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 1622376598
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2096 | 18864 | 9 (0)| 00:00:01 |
| 1 | WINDOW BUFFER | | 2096 | 18864 | 9 (0)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| WHS_LOANS | 2096 | 18864 | 9 (0)| 00:00:01 |
--------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter(:LOANIDS IS NULL OR :LOANIDS LIKE
SYS_OP_C2C("LOANID"||U',%') OR :LOANIDS LIKE
SYS_OP_C2C(U'%,'||"LOANID") OR :LOANIDS LIKE
SYS_OP_C2C(U'%,'||"LOANID"||U',%'))
which is fastest of those three options in my limited testing. But there may well be better and faster approaches.
Not really relevant, but you seem to be running this as SYS, which isn't a good idea even if the data is in another schema. Your loanId column appears to be nvarchar2 (from the SYS_OP_C2C calls), which seems odd for something that could plausibly be a number and in any case seems likely to hold only ASCII characters. NVL(:loanIds,'') doesn't do anything, since null and the empty string are the same in Oracle; and nvl(:loanIds,'XX') = 'XX' can be written as :loanIds is null, which avoids magic values.
My requirement is to find the idle period for each customer. To find idle customers I first have to read the
registration table, which has 1 million records. To find the last transaction time for each customer I have to
join the transaction log table, which has 60 million records. Below is my query for that.
SELECT CUSTOMERNAME,MOBILENUMBER,ACCOUNTNUMBER,
CUSTOMERID,LASTTXNDATE,
FLOOR(SYSDATE - to_date(TO_CHAR(LASTTXNDATE, 'DD/MM/YYYY'),'DD/MM/YYYY')) AS "IDLE DAYS"
FROM REGN_MAST
LEFT JOIN
( SELECT TXNMOBILENUMBER,MAX(TXNDT) AS LASTTXNDATE
FROM TXN_DETL
GROUP BY TXNMOBILENUMBER
)
ON MOBILENUMBER=TXNMOBILENUMBER;
explain plan for
SELECT CUSTOMERNAME,MOBILENUMBER,ACCOUNTNUMBER,
CUSTOMERID,LASTTXNDATE,
FLOOR(SYSDATE - to_date(TO_CHAR(LASTTXNDATE, 'DD/MM/YYYY'),'DD/MM/YYYY')) AS "IDLE DAYS"
FROM REGN_MAST
LEFT JOIN
( SELECT TXNMOBILENUMBER,MAX(TXNDT) AS LASTTXNDATE
FROM TXN_DETL
GROUP BY TXNMOBILENUMBER
)
ON MOBILENUMBER=TXNMOBILENUMBER;
Plan hash value: 403296370
------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1231K| 102M| | 1554K (1)| 05:10:59 | | |
|* 1 | HASH JOIN RIGHT OUTER | | 1231K| 102M| 58M| 1554K (1)| 05:10:59 | | |
| 2 | VIEW | | 1565K| 40M| | 1535K (1)| 05:07:07 | | |
| 3 | HASH GROUP BY | | 1565K| 37M| 2792M| 1535K (1)| 05:07:07 | | |
| 4 | PARTITION RANGE ALL | | 80M| 1926M| | 1321K (1)| 04:24:24 | 1 |1048575|
| 5 | PARTITION HASH ALL | | 80M| 1926M| | 1321K (1)| 04:24:24 | 1 | 4 |
| 6 | TABLE ACCESS FULL | TXN_DETL | 80M| 1926M| | 1321K (1)| 04:24:24 | 1 |1048575|
| 7 | PARTITION RANGE ALL | | 1231K| 70M| | 12237 (1)| 00:02:27 | 1 |1048575|
| 8 | PARTITION HASH ALL | | 1231K| 70M| | 12237 (1)| 00:02:27 | 1 | 4 |
| 9 | TABLE ACCESS BY LOCAL INDEX ROWID| REGN_MAST | 1231K| 70M| | 12237 (1)| 00:02:27 | 1 |1048575|
| 10 | BITMAP CONVERSION TO ROWIDS | | | | | | | | |
| 11 | BITMAP INDEX FULL SCAN | IDX_REGN_MAST_7 | | | | | | 1 |1048575|
------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MOBILENUMBER"="TXNMOBILENUMBER"(+))
Note
-----
- dynamic sampling used for this statement (level=11)
------------------------------------------------------------------------------------------------------------------------------------------------
This query takes more than 25 minutes. How can I improve its performance?
Any help will be greatly appreciated!
Your query uses all the data from both tables, so the first choice is to check an execution plan that uses FULL TABLE SCANs.
Remember a FULL TABLE SCAN is slow, but selecting all rows from a table via an INDEX is much slower...
So you should aim for an execution plan as follows:
------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1000K| 60M| | 176K (2)| 00:00:07 |
|* 1 | HASH JOIN OUTER | | 1000K| 60M| 41M| 176K (2)| 00:00:07 |
| 2 | TABLE ACCESS FULL | REGN_MAST | 1000K| 29M| | 1370 (1)| 00:00:01 |
| 3 | VIEW | | 1014K| 30M| | 170K (2)| 00:00:07 |
| 4 | HASH GROUP BY | | 1014K| 16M| 1610M| 170K (2)| 00:00:07 |
| 5 | TABLE ACCESS FULL| TXN_DETL | 60M| 972M| | 49771 (1)| 00:00:02 |
------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MOBILENUMBER"="TXNMOBILENUMBER"(+))
Depending on your HW and memory configuration the time will vary, but on recent HW I'd expect an elapsed time below 10 minutes.
You may further reduce it using
a) parallel query
b) keep a materialized view holding the last transaction date
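Rough sketches of both ideas, using the object names from the question (the parallel degree and the materialized view name are illustrative assumptions):
-- a) force full scans and parallel execution via hints
SELECT /*+ FULL(REGN_MAST) PARALLEL(REGN_MAST 4) */
       CUSTOMERNAME, MOBILENUMBER, ACCOUNTNUMBER, CUSTOMERID, LASTTXNDATE,
       FLOOR(SYSDATE - TRUNC(LASTTXNDATE)) AS "IDLE DAYS"
FROM REGN_MAST
LEFT JOIN ( SELECT /*+ FULL(TXN_DETL) PARALLEL(TXN_DETL 4) */
                   TXNMOBILENUMBER, MAX(TXNDT) AS LASTTXNDATE
            FROM TXN_DETL
            GROUP BY TXNMOBILENUMBER )
ON MOBILENUMBER = TXNMOBILENUMBER;
-- b) pre-compute the last transaction date once, refresh on your own schedule
CREATE MATERIALIZED VIEW LAST_TXN_MV
REFRESH COMPLETE ON DEMAND
AS
SELECT TXNMOBILENUMBER, MAX(TXNDT) AS LASTTXNDATE
FROM TXN_DETL
GROUP BY TXNMOBILENUMBER;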
Here is my test with generated data, leading to 5+ minutes elapsed (see below).
So my advice: either remove all indexes or hint FULL table scans, and retry.
SQL> set timi on
SQL> set autotrace traceonly
SQL> SELECT CUSTOMERNAME,MOBILENUMBER,ACCOUNTNUMBER,
2 CUSTOMERID,LASTTXNDATE,
3 FLOOR(SYSDATE - to_date(TO_CHAR(LASTTXNDATE, 'DD/MM/YYYY'),'DD/MM/YYYY')
) AS "IDLE DAYS"
4 FROM REGN_MAST
5 LEFT JOIN
6 ( SELECT TXNMOBILENUMBER,MAX(TXNDT) AS LASTTXNDATE
7 FROM TXN_DETL
8 GROUP BY TXNMOBILENUMBER
9 )
10 ON MOBILENUMBER=TXNMOBILENUMBER;
1000000 rows selected.
Elapsed: 00:05:42.23
Sample Data
create table REGN_MAST
as
select
'Name'||rownum CUSTOMERNAME,'00'||rownum MOBILENUMBER, 99*rownum ACCOUNTNUMBER, rownum CUSTOMERID
from dual connect by level <= 1000000;
create table TXN_DETL
as
with cust as (
select
'00'||rownum TXNMOBILENUMBER
from dual connect by level <= 1000000),
trans as (
select DATE'2018-01-01' + rownum TXNDT
from dual connect by level <= 60)
select TXNMOBILENUMBER, TXNDT
from cust CROSS join trans;
I would try rewriting the query as:
SELECT m.CUSTOMERNAME, m.MOBILENUMBER, m.ACCOUNTNUMBER,
m.CUSTOMERID, t.TXNDT,
FLOOR(SYSDATE - TRUNC(TXNDT)) AS IDLE_DAYS
FROM REGN_MAST m JOIN
TXN_DETL t
ON m.MOBILENUMBER = t.TXNMOBILENUMBER
WHERE t.TXNDT = (SELECT MAX(t2.TXNDT) FROM TXN_DETL t2 WHERE m.MOBILENUMBER = t2.TXNMOBILENUMBER);
Then, be sure that you have an index on TXN_DETL(TXNMOBILENUMBER, TXNDT) for performance.
I changed the LEFT JOIN to an INNER JOIN under the assumption that all customers have transactions.
This also simplifies the date arithmetic. That has less to do with performance than readability.
Create a covering index on TXN_DETL(TXNMOBILENUMBER,TXNDT).
According to the execution plan 86% of the cost is for the full table scan on TXN_DETL. If there is an index on all the relevant columns Oracle can use that index as a skinny table. An INDEX FAST FULL SCAN operation might run significantly faster than TABLE ACCESS FULL.
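A minimal sketch of that index (the name is an illustrative assumption):
CREATE INDEX IDX_TXN_DETL_COVER ON TXN_DETL (TXNMOBILENUMBER, TXNDT);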
I am porting some SQL server procedures to Oracle and find an interesting situation where the Oracle SQL statements are dramatically slower than the identical logic using cursors.
On investigation, I think there may be a particular problem with 'NOT EXISTS' (maybe?).
Here I put 100k leads into TMP_TXN and then use these as a filter to extract records from Payments for which there is no transaction (see the SQL construct below).
INSERT INTO tmp_txn ....
SELECT ....
FROM txn,
customers
WHERE txn.customer_id = customers.customer_id
AND customers.customer_status LIKE 'A%'
AND txn.txn_date BETWEEN start_date AND end_date;
Then I insert into TMP_LEADS the rows that are in payments but not in TMP_TXN:
INSERT INTO tmp_leads ....
SELECT ....
FROM payments eap, customers
WHERE eap.customer_id = customers.customer_id
AND customers.customer_status LIKE 'A%'
AND NOT EXISTS (SELECT TMP_TXN.CUSTOMER_ID
FROM TMP_TXN
WHERE tmp_txn.customer_id = eap.customer_id
AND ....;
The explain plan is:
Plan hash value: 67643415
-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 665 | 138K| 15450 (1)| 00:03:06 |
| 1 | LOAD TABLE CONVENTIONAL | TMP_LEADS | | | | |
|* 2 | FILTER | | | | | |
|* 3 | HASH JOIN | | 665 | 138K| 14785 (1)| 00:02:58 |
|* 4 | TABLE ACCESS FULL | CUSTOMER_TYPES | 6 | 36 | 3 (0)| 00:00:01 |
|* 5 | HASH JOIN | | 726 | 146K| 14781 (1)| 00:02:58 |
|* 6 | TABLE ACCESS FULL | EDM_SEGMENTS_EVENTS | 23 | 414 | 5 (0)| 00:00:01 |
| 7 | NESTED LOOPS | | | | | |
| 8 | NESTED LOOPS | | 1297 | 239K| 14776 (1)| 00:02:58 |
|* 9 | TABLE ACCESS FULL | EDM_AGREEMENT_PAYMENTS | 1297 | 158K| 12180 (1)| 00:02:27 |
|* 10 | INDEX UNIQUE SCAN | PK_CUSTOMERS | 1 | | 1 (0)| 00:00:01 |
|* 11 | TABLE ACCESS BY INDEX ROWID| CUSTOMERS | 1 | 64 | 2 (0)| 00:00:01 |
|* 12 | TABLE ACCESS BY INDEX ROWID | TMP_TXN | 1 | 81 | 2 (0)| 00:00:01 |
|* 13 | INDEX RANGE SCAN | IX_TMP_TXN_TXN_CODE | 1 | | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter( NOT EXISTS (SELECT 0 FROM "TMP_TXN" "TMP_TXN" WHERE "TMP_TXN"."TXN_CODE"=:B1 AND
"TMP_TXN"."CUSTOMER_ID"=:B2 AND "TMP_TXN"."AGREEMENT_ID"=:B3 AND ("TMP_TXN"."AMOUNT"<0 AND
"TMP_TXN"."AMOUNT">=:B4*1.1 AND "TMP_TXN"."AMOUNT"<=:B5*.9 OR "TMP_TXN"."AMOUNT">0 AND
"TMP_TXN"."AMOUNT">=:B6*0.9 AND "TMP_TXN"."AMOUNT"<=:B7*1.1)))
3 - access("CUSTOMERS"."CUSTOMER_TYPE"="CUSTOMER_TYPES"."CUSTOMER_TYPE")
4 - filter("CUSTOMER_TYPES"."ACTIVE"=U'1')
5 - access("CUSTOMERS"."CUSTOMER_SEGMENT"="EDM_SEGMENTS_EVENTS"."SEGMENT")
6 - filter("EDM_SEGMENTS_EVENTS"."EVENT_ID"=607 AND "EDM_SEGMENTS_EVENTS"."ACTIVE"=U'1')
9 - filter(ROUND("EAP"."PMNT_DAY",0)>=19 AND ROUND("EAP"."PMNT_DAY",0)<=31 AND
"EAP"."PERIODICITY"=U'M' AND "EAP"."EVENT_ID"=607)
10 - access("EAP"."CUSTOMER_ID"="CUSTOMERS"."CUSTOMER_ID")
11 - filter("CUSTOMERS"."CUSTOMER_STATUS" LIKE U'A%')
12 - filter("TMP_TXN"."CUSTOMER_ID"=:B1 AND "TMP_TXN"."AGREEMENT_ID"=:B2 AND
("TMP_TXN"."AMOUNT"<0 AND "TMP_TXN"."AMOUNT">=:B3*1.1 AND "TMP_TXN"."AMOUNT"<=:B4*.9 OR
"TMP_TXN"."AMOUNT">0 AND "TMP_TXN"."AMOUNT">=:B5*0.9 AND "TMP_TXN"."AMOUNT"<=:B6*1.1))
13 - access("TMP_TXN"."TXN_CODE"=:B1)
There is an index on tmp_txn (customer_id), and there are about 100k records in the table. Oracle has 20gb SGA and 20gb PGA, so this should be cached easily.
Resource plan screenshot
Here you can see the script running but not using any resources (data reads <100k/sec!).
The (possible) problem seems to be the NOT EXISTS: this selection takes >1000 seconds with almost no access to the data tables (resource monitor).
Stats view in OM showing 898 seconds and 100% cpu
Am I doing something stupid in Oracle? This works well (and quickly) in SQL Server.
Try using a LEFT JOIN with an IS NULL filter (an anti-join) instead of the NOT EXISTS:
INSERT INTO tmp_leads ...
SELECT ...
FROM payments eap
INNER JOIN customers cust
ON cust.customer_id = eap.customer_id
LEFT JOIN tmp_txn txn
ON txn.customer_id = eap.customer_id
WHERE txn.customer_id IS NULL
NOT EXISTS is generally the construct to use when the inner query's result set is huge; here you could try NOT IN instead. Please try the below:
INSERT INTO tmp_leads ....
SELECT ....
FROM payments eap, customers
WHERE eap.customer_id = customers.customer_id
AND customers.customer_status LIKE 'A%'
AND customers.customer_id NOT IN (SELECT TMP_TXN.CUSTOMER_ID
FROM TMP_TXN
WHERE
--tmp_txn.customer_id = eap.customer_id
--AND
....;
I have a query which runs fine and gives me output. The problem is that the same query takes different elapsed times to complete: the average elapsed time is 10 minutes, but sometimes it takes more than an hour. The query uses a SQL profile to get the best execution plan, and that profile is forced every time by the DBA.
INSERT INTO DATA_UPDATE_EVENT
(
structure_definition_id,
eod_run_id,
publish_group_name,
JMSDBUS_DESTINATION,
dbaxbuild_location,
LOCATION,
original_data_type)
SELECT DISTINCT eod_structure_definition_id,
p_eod_run_id,
p_publish_group,
pg.JMSDBUS_DESTINATION,
pg.dbaxbuild_location,
pg.LOCATION,
sd.DATA_TYPE
FROM
PUBLISH_GROUP pg,
STRUCTURE_EOD_MAPPING sem,
WATCH_LIST_STRUCTURE wls,
STRUCTURE_DEFINITION sd
WHERE pg.publish_group_name = sem.publish_group_name
AND sem.publish_group_name = p_publish_group
AND wls.structure_definition_id = sem.structure_definition_id
AND wls.watch_list_id IN (SELECT watch_list_id
FROM TMP_WATCHLIST)
AND sd.structure_definition_id = sem.structure_definition_id
AND (sd.defcurve_name IS NULL
OR sd.defcurve_name IN (SELECT curve_shortname
FROM
DEFCURVE_CURRENT
WHERE CURVE_STATUS = 'live')
)
AND (sd.generic_class_name is null
or sd.generic_class_name <> 'CREDIT'
or (
sd.generic_class_name = 'CREDIT'
and generic_name in
(
select generic_name
from
analytic_object ao,
analytic_object_instance aoi,
analytic_object_property aop,
defcurve_current dc
where ao.analytic_object_id = aoi.analytic_object_id
and aop.analytic_object_instance_id =aoi.analytic_object_instance_id
and AOP.PROPERTY_NAME = 'CreditObjectName'
and aop.prop_value1 = dc.curve_shortname
and aop.effective_to > systimestamp
and aop.effective_from < systimestamp
and dc.curve_status = 'live'
and aoi.analytic_object_instance_id in
(select analytic_object_instance_id
from
analytic_object_property
where property_name = 'CreditObjectType'
)
)
)
);
Here is the execution plan for the above query:
Execution Plan
------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time
------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 232 (100)|
| 1 | VIEW | VW_DIS_1 | 1 | 4052 | 232 (1)| 00:00:
| 2 | SORT UNIQUE | | 1 | 186 | 232 (1)| 00:00:
| 3 | FILTER | | | | |
| 4 | NESTED LOOPS | | 1 | 186 | 231 (0)| 00:00:
| 5 | NESTED LOOPS | | 1 | 158 | 229 (0)| 00:00:
| 6 | NESTED LOOPS | | 1 | 132 | 228 (0)| 00:00:
| 7 | NESTED LOOPS | | 15 | 1830 | 3 (0)| 00:00:
| 8 | TABLE ACCESS BY INDEX ROWID | PUBLISH_GROUP | 1 | 109 | 1 (0)| 00:00:
| 9 | INDEX UNIQUE SCAN | PK$PUBLISHGROUP | 1 | | 0 (0)|
| 10 | TABLE ACCESS FULL | TMP_WATCHLIST | 15 | 195 | 2 (0)| 00:00:
| 11 | INDEX RANGE SCAN | PK$WATCHLISTSTRUCTURE | 1322 | 13220 | 15 (0)| 00:00:
| 12 | INDEX UNIQUE SCAN | PK$STRUCTURE_EOD_MAPPING | 1 | 26 | 1 (0)| 00:00:
| 13 | TABLE ACCESS BY INDEX ROWID | STRUCTURE_DEFINITION | 1 | 28 | 2 (0)| 00:00:
| 14 | INDEX UNIQUE SCAN | PK$STRUCTURE_DEFINITION | 1 | | 1 (0)| 00:00:
| 15 | TABLE ACCESS BY INDEX ROWID | DEFCURVE_CURRENT | 1 | 22 | 2 (0)| 00:00:
| 16 | INDEX UNIQUE SCAN | PK$DEFCURVE_CURRENT | 1 | | 1 (0)| 00:00:
| 17 | PX COORDINATOR | | | | |
| 18 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 118 | 5023 (1)| 00:01:
| 19 | NESTED LOOPS | | 1 | 118 | 5023 (1)| 00:01:
| 20 | NESTED LOOPS | | 1 | 96 | 5023 (1)| 00:01:
| 21 | NESTED LOOPS | | 135 | 6480 | 4939 (1)| 00:01:
| 22 | NESTED LOOPS | | 8343 | 236K| 1228 (1)| 00:00:
| 23 | PX BLOCK ITERATOR | | | | |
| 24 | TABLE ACCESS FULL | ANALYTIC_OBJECT | 28 | 504 | 231 (1)| 00:00:
| 25 | TABLE ACCESS BY GLOBAL INDEX ROWID| ANALYTIC_OBJECT_INSTANCE | 300 | 3300 | 298 (0)| 00:00:
| 26 | INDEX RANGE SCAN | UQ$ANALYTIC_OBJECT_INSTANCE | 300 | | 3 (0)| 00:00:
| 27 | INDEX UNIQUE SCAN | PK$ANALYTIC_OBJECT_PROPERTY | 1 | 19 | 2 (0)| 00:00:
| 28 | TABLE ACCESS BY GLOBAL INDEX ROWID | ANALYTIC_OBJECT_PROPERTY | 1 | 48 | 3 (0)| 00:00:
| 29 | INDEX UNIQUE SCAN | PK$ANALYTIC_OBJECT_PROPERTY | 1 | | 2 (0)| 00:00:
| 30 | TABLE ACCESS BY INDEX ROWID | DEFCURVE_CURRENT | 1 | 22 | 1 (0)| 00:00:
| 31 | INDEX UNIQUE SCAN | PK$DEFCURVE_CURRENT | 1 | | 0 (0)|
------------------------------------------------------------------------------------------------------------------------
Note
-----
- dynamic sampling used for this statement
- SQL profile "SYS_SQLPROF_01505a8ce6144000" used for this statement
Can someone please suggest how to approach this, and what information we need to ask the DBA to provide?
Either you or your DBA need to understand your data and your system. This is the most basic principle of tuning.
Queries will perform predictably, provided the environment is stable. If run times vary wildly then you need to find what is different. Erratically poor performance may be due to lots of other users contending for system resource at particular times or it might be due to variations in the volume or nature of the data. There are other possibilities too, but those are the ones to start with.
Your DBA should already be monitoring database usage; if they aren't, they need to start right now. As it doesn't seem likely your organization is paying for the Diagnostics option, you can use Statspack for this.
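A minimal sketch of using it, assuming Statspack is installed under the usual PERFSTAT schema:
EXEC perfstat.statspack.snap;
-- ... reproduce one slow run of the query ...
EXEC perfstat.statspack.snap;
-- then run ?/rdbms/admin/spreport.sql and choose the two snapshot IDs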
As for data variation, there is one clue in the posted code:
AND wls.watch_list_id IN (SELECT watch_list_id
FROM TMP_WATCHLIST)
Assuming you adhere to a sensible naming convention (and years of SO have convinced me to be wary of such assumptions) then TMP_WATCHLIST is a temporary table. Which suggests it could hold different data and different volumes of data each time you run the query. If that is the case that would be a good place to start. Depending on the precise problem possible solutions include dynamic sampling, fixed stats or a cardinality hint.
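For instance, if TMP_WATCHLIST is a global temporary table you could pin representative statistics on it so the optimizer stops guessing — a sketch, where the row count is an illustrative assumption:
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => USER,
    tabname => 'TMP_WATCHLIST',
    numrows => 15);  -- assumed typical watch-list size
END;
/
Alternatively, a CARDINALITY hint in the subquery (again with an illustrative value) has a similar effect:
AND wls.watch_list_id IN (SELECT /*+ CARDINALITY(tw 15) */ tw.watch_list_id
                          FROM TMP_WATCHLIST tw)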
Here are some ideas:
Bind variable? Is p_publish_group a bind variable? If so, is there a histogram on sem.publish_group_name? It looks like this query returns vastly different amounts of data depending on the input. Adaptive cursor sharing might help, but that would require a histogram (see the sketch after this list).
Bad profile? The cardinality estimates are horrible. If this statement ran for an hour then I would assume there are many millions of rows. But Oracle only expects 1 rows, even with the SQL Profile. Was the profile created on a very small data set and applied on a much larger data set?
Full hints? Even if your optimizer statistics are accurate I bet the cardinality would still be far off. This may be one of those difficult queries that is so weird it requires a large amount of hints. For example, something like /*+ full(pg) full(sem) use_hash(pg sem) ... */. Indexes and nested loops work fine for small amount of data. But if this query runs for an hour then it likely needs full table scans and hash joins.
SQL Monitoring. Run select dbms_sqltune.report_sql_monitor(sql_id => '<your sql id>', type => 'text') from dual; to find out what the query is actually doing and how much time is spent on each operation. The explain plans are only estimates, you need to know what's really happening. I bet when you run this you'll see a few long-running steps where Estimated Rows = 1 and Actual Rows = 1000000.
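A sketch for the histogram idea in point 1, using the table and column names from the query (the bucket count is a common default, not a recommendation):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'STRUCTURE_EOD_MAPPING',
    method_opt => 'FOR COLUMNS publish_group_name SIZE 254');
END;
/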
I need advice on the attached query. The query executes for over an hour and does a full table scan as per the explain plan. I am fairly new to query tuning and would appreciate some advice.
Firstly, why would I get a full table scan even though all the columns I use have indexes created on them?
Secondly, is there any possibility of reducing the execution time? All tables accessed are huge and contain millions of records, but even so I would like to scope out some options. Appreciate your help.
Query:
select
distinct rtrim(a.cod_acct_no)||'|'||
a.cod_prod||'|'||
to_char(a.dat_acct_open,'Mon DD YYYY HH:MMAM')||'|'||
a.cod_acct_title||'|'||
a.cod_acct_stat||'|'||
ltrim(to_char(a.amt_od_limit,'99999999999999999990.999999'))||'|'||
ltrim(to_char(a.bal_book,'99999999999999999990.999999'))||'|'||
a.flg_idd_auth||'|'||
a.flg_mnt_status||'|'||
rtrim(c.cod_acct_no)||'|'||
c.cod_10||'|'||
d.nam_branch||'|'||
d.nam_cc_city||'|'||
d.nam_cc_state||'|'||
c.cod_1||'|'||
c.cod_14||'|'||
num_14||'|'||
a.cod_cust||'|'||
c.cod_last_mnt_chkrid||'|'||
c.dat_last_mnt||'|'||
c.ctr_updat_srlno||'|'||
c.cod_20||'|'||
c.num_16||'|'||
c.cod_14||'|'||
c.num_10 ||'|'||
a.flg_classif_reqd||'|'||
(select g.cod_classif_plan_id||'|'||
g.cod_classif_plan_id
from
ac_acct_preferences g
where
a.cod_acct_no=g.cod_acct_no AND g.FLG_MNT_STATUS = 'A' )||'|'||
(select e.dat_cam_expiry from flexprod_host.AC_ACCT_PLAN_CRITERIA e where a.cod_acct_no=e.cod_acct_no and e.FLG_MNT_STATUS ='A')||'|'||
c.cod_23||'|'||
lpad(trim(a.cod_cc_brn),4,0)||'|'||
(select min( o.dat_eff) from ch_acct_od_hist o where a.cod_acct_no=o.cod_acct_no )
from
ch_acct_mast a,
ch_acct_cbr_codes c,
ba_cc_brn_mast d
where
a.flg_mnt_status ='A'
and c.flg_mnt_status ='A'
and a.cod_acct_no= c.cod_acct_no(+)
and a.cod_cc_brn=d.cod_cc_brn
and a.cod_prod in (
299,200,804,863,202,256,814,232,182,844,279,830,802,833,864,
813,862,178,205,801,235,897,231,187,229,847,164,868,805,207,
250,837,274,253,831,893,201,809,846,819,820,845,811,843,285,
894,284,817,832,278,818,810,181,826,867,825,848,871,866,895,
770,806,827,835,838,881,853,188,816,293,298)
Query Plan:
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 4253465430
------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 733K| 125M| | 468K (1)|999:59:59 | | |
| 1 | TABLE ACCESS BY INDEX ROWID | AC_ACCT_PREFERENCES | 1 | 26 | | 3 (0)| 00:01:05 | | |
|* 2 | INDEX UNIQUE SCAN | IN_AC_ACCT_PREFERENCES_1 | 1 | | | 2 (0)| 00:00:43 | | |
| 3 | PARTITION HASH SINGLE | | 1 | 31 | | 3 (0)| 00:01:05 | KEY | KEY |
| 4 | TABLE ACCESS BY LOCAL INDEX ROWID| AC_ACCT_PLAN_CRITERIA | 1 | 31 | | 3 (0)| 00:01:05 | KEY | KEY |
|* 5 | INDEX UNIQUE SCAN | IN_AC_ACCT_PLAN_CRITERIA_1 | 1 | | | 2 (0)| 00:00:43 | KEY | KEY |
| 6 | SORT AGGREGATE | | 1 | 29 | | | | | |
| 7 | FIRST ROW | | 1 | 29 | | 3 (0)| 00:01:05 | | |
|* 8 | INDEX RANGE SCAN (MIN/MAX) | IN_CH_ACCT_OD_HIST_1 | 1 | 29 | | 3 (0)| 00:01:05 | | |
| 9 | HASH UNIQUE | | 733K| 125M| 139M| 468K (1)|999:59:59 | | |
|* 10 | HASH JOIN | | 733K| 125M| | 439K (1)|999:59:59 | | |
|* 11 | TABLE ACCESS FULL | BA_CC_BRN_MAST | 3259 | 136K| | 31 (0)| 00:11:04 | | |
|* 12 | HASH JOIN | | 747K| 97M| 61M| 439K (1)|999:59:59 | | |
| 13 | PARTITION HASH ALL | | 740K| 52M| | 286K (1)|999:59:59 | 1 | 64 |
|* 14 | TABLE ACCESS FULL | CH_ACCT_MAST | 740K| 52M| | 286K (1)|999:59:59 | 1 | 64 |
|* 15 | TABLE ACCESS FULL | CH_ACCT_CBR_CODES | 9154K| 541M| | 117K (1)|699:41:01 | | |
------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("COD_ACCT_NO"=:B1 AND "FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_co
de'),'0')))
5 - access("COD_ACCT_NO"=:B1 AND "FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_co
de'),'0')))
8 - access("COD_ACCT_NO"=:B1)
filter("COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_code'),'0')))
10 - access("COD_CC_BRN"="COD_CC_BRN")
11 - filter("COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_code'),'0')))
12 - access("COD_ACCT_NO"="COD_ACCT_NO")
14 - filter(("COD_PROD"=164 OR "COD_PROD"=178 OR "COD_PROD"=181 OR "COD_PROD"=182 OR "COD_PROD"=187 OR "COD_PROD"=188 OR
"COD_PROD"=200 OR "COD_PROD"=201 OR "COD_PROD"=202 OR "COD_PROD"=205 OR "COD_PROD"=207 OR "COD_PROD"=229 OR "COD_PROD"=231 OR
"COD_PROD"=232 OR "COD_PROD"=235 OR "COD_PROD"=250 OR "COD_PROD"=253 OR "COD_PROD"=256 OR "COD_PROD"=274 OR "COD_PROD"=278 OR
"COD_PROD"=279 OR "COD_PROD"=284 OR "COD_PROD"=285 OR "COD_PROD"=293 OR "COD_PROD"=298 OR "COD_PROD"=299 OR "COD_PROD"=770 OR
"COD_PROD"=801 OR "COD_PROD"=802 OR "COD_PROD"=804 OR "COD_PROD"=805 OR "COD_PROD"=806 OR "COD_PROD"=809 OR "COD_PROD"=810 OR
"COD_PROD"=811 OR "COD_PROD"=813 OR "COD_PROD"=814 OR "COD_PROD"=816 OR "COD_PROD"=817 OR "COD_PROD"=818 OR "COD_PROD"=819 OR
"COD_PROD"=820 OR "COD_PROD"=825 OR "COD_PROD"=826 OR "COD_PROD"=827 OR "COD_PROD"=830 OR "COD_PROD"=831 OR "COD_PROD"=832 OR
"COD_PROD"=833 OR "COD_PROD"=835 OR "COD_PROD"=837 OR "COD_PROD"=838 OR "COD_PROD"=843 OR "COD_PROD"=844 OR "COD_PROD"=845 OR
"COD_PROD"=846 OR "COD_PROD"=847 OR "COD_PROD"=848 OR "COD_PROD"=853 OR "COD_PROD"=862 OR "COD_PROD"=863 OR "COD_PROD"=864 OR
"COD_PROD"=866 OR "COD_PROD"=867 OR "COD_PROD"=868 OR "COD_PROD"=871 OR "COD_PROD"=881 OR "COD_PROD"=893 OR "COD_PROD"=894 OR
"COD_PROD"=895 OR "COD_PROD"=897) AND "FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_
code'),'0')))
15 - filter("FLG_MNT_STATUS"='A' AND "COD_ENTITY_VPD"=TO_NUMBER(NVL(SYS_CONTEXT('CLIENTCONTEXT','entity_code'),'0')))
Considering each table contains over 100 columns, I am limited in uploading the entire table definitions. However, please find below the details for the columns accessed in the WHERE clause. Hope this helps.
Columns Type Nullable
cod_acct_no CHAR(16) N
FLG_MNT_STATUS CHAR(1) N
cod_23 VARCHAR2(360) Y
cod_cc_brn NUMBER(5) N
cod_prod NUMBER N
I hope this can bring the cost down.
select
distinct rtrim(a.cod_acct_no)||'|'||
a.cod_prod||'|'||
to_char(a.dat_acct_open,'Mon DD YYYY HH:MMAM')||'|'||
a.cod_acct_title||'|'||
a.cod_acct_stat||'|'||
ltrim(to_char(a.amt_od_limit,'99999999999999999990.999999'))||'|'||
ltrim(to_char(a.bal_book,'99999999999999999990.999999'))||'|'||
a.flg_idd_auth||'|'||
a.flg_mnt_status||'|'||
rtrim(c.cod_acct_no)||'|'||
c.cod_10||'|'||
d.nam_branch||'|'||
d.nam_cc_city||'|'||
d.nam_cc_state||'|'||
c.cod_1||'|'||
c.cod_14||'|'||
num_14||'|'||
a.cod_cust||'|'||
c.cod_last_mnt_chkrid||'|'||
c.dat_last_mnt||'|'||
c.ctr_updat_srlno||'|'||
c.cod_20||'|'||
c.num_16||'|'||
c.cod_14||'|'||
c.num_10 ||'|'||
a.flg_classif_reqd||'|'||
g.cod_classif_plan_id||'|'||g.cod_classif_plan_id
||'|'||
e.dat_cam_expiry ||'|'||
c.cod_23||'|'||
lpad(trim(a.cod_cc_brn),4,0)||'|'||
(select min( o.dat_eff) from ch_acct_od_hist o where a.cod_acct_no=o.cod_acct_no )
from
ch_acct_mast a
LEFT JOIN ch_acct_cbr_codes c
ON c.cod_acct_no = a.cod_acct_no
AND c.flg_mnt_status = 'A'
JOIN ba_cc_brn_mast d
ON a.cod_cc_brn = d.cod_cc_brn
JOIN ac_acct_preferences g
ON a.cod_acct_no = g.cod_acct_no AND g.FLG_MNT_STATUS = 'A'
INNER JOIN flexprod_host.AC_ACCT_PLAN_CRITERIA e
ON a.cod_acct_no = e.cod_acct_no AND e.FLG_MNT_STATUS = 'A'
WHERE a.flg_mnt_status = 'A'
AND a.cod_prod in (
299,200,804,863,202,256,814,232,182,844,279,830,802,833,864,
813,862,178,205,801,235,897,231,187,229,847,164,868,805,207,
250,837,274,253,831,893,201,809,846,819,820,845,811,843,285,
894,284,817,832,278,818,810,181,826,867,825,848,871,866,895,
770,806,827,835,838,881,853,188,816,293,298)
1. Don't fear full table scans. If a large percent of the rows in a table are being accessed it is more efficient to use a hash join/full table scan than a nested loop/index scan.
2. Fix statistics and re-analyze objects. 999 hours to read a table? That's probably an optimizer bug; have a DBA look at select * from sys.aux_stats$; for some ridiculous values.
The time isn't very useful, but if one of your forecasted values is so significantly off then you need to check all of them. You should probably re-gather stats on all the relevant tables. Use default settings unless there is a good reason. For example, exec dbms_stats.gather_table_stats('your_schema_name','CH_ACCT_MAST');.
3. Look at cardinalities. Are the Rows estimates in the ballpark? They'll almost never be perfect, but if they are off by more than
an order of magnitude or two it can cause problems. Look for the first significant difference and try to correct it.
4. Code change. @Santhosh had a good idea to re-write using ANSI joins and manually unnest a subquery. Although I think you should
try to unnest the other subquery instead. Oracle can automatically unnest subqueries, but not if subqueries "contain aggregate functions".
5. Disable VPD. It looks like this query is being transformed. Make sure you understand exactly what it's doing and why. You may want to disable VPD temporarily, for yourself, while you debug this problem.
6. Parallelism. Since some of these tables are large, you may want to add a parallel hint. But be careful, it is easy to use up a lot
of resources. Try to get the plan right before you do this.
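A sketch of the hint syntax only (the degree of 4 is an illustrative assumption; confirm the plan before spending the resources):
SELECT /*+ FULL(a) PARALLEL(a 4) */ COUNT(*)
FROM ch_acct_mast a
WHERE a.flg_mnt_status = 'A';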