SELECT sub_id, quo_id
FROM cos_emails WITH (nolock)
WHERE quo_id = 999624 AND sub_id = 771336
This query executes in 50 seconds and returns only one record. There are 16,747,425 records in the table.
How can I reduce the execution time?
First of all, show us the execution plan.
I'm pretty sure there are missing indexes.
If a table scan or a clustered index scan is occurring, the query will consume a lot of resources and therefore take a long time to return its result.
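For example, a narrow index on the two filter columns would normally turn this into an index seek, and since the query selects only sub_id and quo_id, the same index also covers it. The index name and the dbo schema are just illustrations:

-- hypothetical index; name and column order are assumptions
CREATE NONCLUSTERED INDEX IX_cos_emails_quo_sub
ON dbo.cos_emails (quo_id, sub_id);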
Related
I have this query on the order lines table. It's a fairly large table. I am trying to get the quantity shipped by item in the last 365 days. The query works, but it is very slow to return results. Should I use a function-based index for this? I have read a bit about them, but haven't worked with them much at all.
How can I make this query faster?
select OOL.INVENTORY_ITEM_ID
,SUM(nvl(OOL.shipped_QUANTITY,0)) shipped_QUANTITY_Last_365
from oe_order_lines_all OOL
where ool.actual_shipment_date>=trunc(sysdate)-365
and cancelled_flag='N'
and fulfilled_flag='Y'
group by ool.inventory_item_id;
Explain plan:
Stats are up to date; we regather them once a week.
The query takes 30+ minutes to finish.
UPDATE
After adding this index:
The explain plan shows the query is using the index now:
The query runs faster but not 'fast,' completing in about 6 minutes.
UPDATE2
I created a covering index as suggested by Matthew and Gordon:
The query now completes in less than 1 second.
Explain Plan:
I still wonder whether a function-based index would also have been a viable solution, but I don't have time to play with it right now.
As a rule, using an index that accesses a "significant" percentage of the rows in your table is slower than a full table scan. Depending on your system, "significant" could be as low as 5% or 10%.
So, think about your data for a minute...
How many rows in OE_ORDER_LINES_ALL are cancelled? (Hopefully not many...)
How many rows are fulfilled? (Hopefully almost all of them...)
How many rows were shipped in the last year? (Unless you have more than 10 years of history in your table, more than 10% of them...)
Put that all together and your query is probably going to have to read at least 10% of the rows in your table. This is very near the threshold where an index is going to be worse than a full table scan (or, at least not much better than one).
Now, if you need to run this query a lot, you have a few options:
1. A materialized view, possibly covering the prior 11 months, combined with a live query against OE_ORDER_LINES_ALL for the current month-to-date (a sketch follows this list).
2. A covering index (see below).
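A minimal sketch of the materialized-view idea, assuming a monthly grain; the MV name and refresh options are illustrative. You would sum its rows for the prior complete months and union that with a live query against OE_ORDER_LINES_ALL for the current month-to-date:

-- sketch only: name, build and refresh options are assumptions
CREATE MATERIALIZED VIEW mv_shipped_by_month
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT ool.inventory_item_id,
       TRUNC(ool.actual_shipment_date, 'MM') AS ship_month,
       SUM(NVL(ool.shipped_quantity, 0))     AS shipped_quantity
FROM   oe_order_lines_all ool
WHERE  ool.cancelled_flag = 'N'
AND    ool.fulfilled_flag = 'Y'
GROUP BY ool.inventory_item_id,
         TRUNC(ool.actual_shipment_date, 'MM');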
You can improve the performance of an index, even one accessing a significant percentage of the table rows, by making it include all the information required by the query -- allowing Oracle to avoid accessing the table at all.
CREATE INDEX idx1 ON OE_ORDER_LINES_ALL
( actual_shipment_date,
cancelled_flag,
fulfilled_flag,
inventory_item_id,
shipped_quantity ) ONLINE;
With an index like that, Oracle can satisfy the query by just reading the index (which is faster because it's much smaller than the table).
For this query:
select OOL.INVENTORY_ITEM_ID,
SUM(OOL.shipped_QUANTITY) as shipped_QUANTITY_Last_365
from oe_order_lines_all OOL
where ool.actual_shipment_date >= trunc(sysdate) - 365 and
cancelled_flag = 'N' and
fulfilled_flag = 'Y'
group by ool.inventory_item_id;
I would recommend starting with an index on oe_order_lines_all(cancelled_flag, fulfilled_flag, actual_shipment_date). That should do a good job of identifying the rows.
You can add the additional columns inventory_item_id and shipped_quantity to the index as well.
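As DDL, something like this (the index names are just placeholders):

-- sketch of the suggestion above; names are placeholders
CREATE INDEX ix_ool_flags_shipdate
ON oe_order_lines_all (cancelled_flag, fulfilled_flag, actual_shipment_date);

-- or extend it so the table itself never needs to be visited
CREATE INDEX ix_ool_flags_shipdate_cover
ON oe_order_lines_all (cancelled_flag, fulfilled_flag, actual_shipment_date,
                       inventory_item_id, shipped_quantity);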
Let's recapitulate the facts:
a) You access about 300K rows from your table (see the cardinality in the 3rd line of the execution plan).
b) You use a FULL TABLE SCAN to get the data.
c) The query is very slow.
The first thing is to check why the FULL TABLE SCAN is so slow - if the table is extremely large (check the BYTES in user_segments), you need to optimize the access to your data.
But remember that no index will help you get 300K rows from, say, 30M total rows.
Index access to 300K rows can take a quarter of an hour or even more if the index is not much used and a large part of it is on disk.
What you need is partitioning - in your case, range partitioning on actual_shipment_date - on a monthly or yearly basis for your data size.
This eliminates the need to scan the old data (partition pruning) and makes the query much more effective. A sketch follows.
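A rough sketch of yearly range partitioning via CREATE TABLE ... AS SELECT; the new table name and the boundary dates are assumptions, and in practice you might convert the existing table online with DBMS_REDEFINITION instead:

-- sketch only: boundaries and names are illustrative
CREATE TABLE oe_order_lines_all_part
PARTITION BY RANGE (actual_shipment_date)
( PARTITION p_2014    VALUES LESS THAN (DATE '2015-01-01'),
  PARTITION p_2015    VALUES LESS THAN (DATE '2016-01-01'),
  PARTITION p_current VALUES LESS THAN (MAXVALUE) )
AS
SELECT * FROM oe_order_lines_all;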
Another possibility - if the number of rows is small but the table segment is very large - is to reorganize the table to get a better full-scan time.
I have a query that returns results very fast, seconds. But when I want to fetch all the rows it takes several hours.
If my definition of how long a query takes to run is the time to fetch all the rows, how can I measure this besides actually fetching all the rows?
Would a select count(*) be a good indicator of how long it would take to fetch all the rows?
A select count(*) is likely going to do a table scan to return the total number of records.
Depending on what is in the table and how it is indexed, the count(*) would most likely return faster than running a select *.
You could run some baselines on your table by using SET STATISTICS TIME ON and SET STATISTICS IO ON.
I would also suggest running with client statistics.
Also, try running a TOP 100, 1000, and 10000 with the above turned on.
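For instance (the table name is just a placeholder for whichever query you are baselining):

SET STATISTICS TIME ON;
SET STATISTICS IO ON;

SELECT TOP (100)   * FROM dbo.YourTable;  -- placeholder table
SELECT TOP (1000)  * FROM dbo.YourTable;
SELECT TOP (10000) * FROM dbo.YourTable;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;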
When I performance tune, I like to look at both the actual execution plan and the estimated execution plan.
I have the query below, which takes more than 5 seconds on average to fetch data in a transaction that is triggered numerous times by the application. I am looking for a hint that can help me reduce the time taken by this query every time it is fired. My constraints are that I cannot add any indexes or change any application settings for this query, so Oracle hints or changing the structure of the query are the only choices I have. Please find my query below.
SELECT SUM(c.cash_flow_amount) FROM CM_CONTRACT_DETAIL a ,CM_CONTRACT b,CM_CONTRACT_CASHFLOW c
WHERE a.country_code = Ip_country_code
AND a.company_code = ip_company_code
AND a.dealer_bp_id = ip_bp_id
AND a.contract_start_date >= ip_start_date
AND a.contract_start_date <= ip_end_date
AND a.version_number = b.current_version
AND a.status_code IN ('00','10')
AND a.country_code = b.country_code
AND a.company_code = b.company_code
AND a.contract_number = b.contract_number
AND a.country_code = c.country_code
AND a.company_code = c.company_code
AND a.contract_number = c.contract_number
AND a.version_number = c.version_number
AND c.cash_flow_type_code IN ('07','13');
The things to know about these tables are that they are all transactional tables and their data changes every day. They each hold between roughly 100,000 and 1,000,000 (1 to 10 lakh) records.
This is the explain plan currently on the query:
Operation                               Object Name                Rows  Bytes  Cost  Object Node  In/Out  PStart  PStop
SELECT STATEMENT Hint=RULE
  SORT AGGREGATE
    TABLE ACCESS BY INDEX ROWID         CM_CONTRACT_CASHFLOW
      NESTED LOOPS
        NESTED LOOPS
          TABLE ACCESS BY INDEX ROWID   CM_CONTRACT_DETAIL
            INDEX RANGE SCAN            XIF760CT_CONTRACT_DETAIL
          TABLE ACCESS BY INDEX ROWID   CM_CONTRACT
            INDEX UNIQUE SCAN           XPKCM_CONTRACT
        INDEX RANGE SCAN                XPKCM_CONTRACT_CASHFLOW
Indexes on CM_CONTRACT_DETAIL:
XPKCM_CONTRACT_DETAIL is a composite unique index on country_code, company_code, contract_number and version_number
XIF760CT_CONTRACT_DETAIL is a non unique index on dealer_bp_id
Indexes on CM_CONTRACT:
XPKCM_CONTRACT is a composite unique index on country_code, company_code, contract_number
Indexes on CM_CONTRACT_CASHFLOW:
XPKCM_CONTRACT_CASHFLOW is a composite unique index on country_code, company_code, contract_number, version_number, supply_sequence_number, cash_flow_type_code and payment_date.
Could you please help improve this query? Please let me know if anything else about the tables is needed. Stats are not gathered on these tables either.
Your query plan says HINT=RULE. Why is that? Is this the standard setting in your dbms? Why not make use of the optimizer? You can use /*+CHOOSE*/ for that. This may be all that's needed. (Why are there no Stats on the tables, though?)
EDIT: The above was nonsense. By not gathering any statistics, you prevent the optimizer from doing its work. It will always fall back to the good old rules, because it has no basis on which to calculate costs and find a better plan. It is strange to see you voluntarily keeping the DBMS from running your queries fast. You can of course use hints in your queries, but always be careful to check and alter them when the table data changes significantly. Better to gather statistics and let the optimizer do this work. As to useful hints:
My feeling says: with that many criteria on CM_CONTRACT_DETAIL, this should be the driving table. You can force that with /*+ LEADING(a) */. Maybe even use a full table scan on that table with /*+ FULL(a) */, which you can still speed up with parallel execution: /*+ PARALLEL(a,4) */. A sketch with these hints applied follows.
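For example (only the hint comment is new; everything else is the original query, and whether these hints actually help must be verified against your data):

SELECT /*+ LEADING(a) FULL(a) PARALLEL(a,4) */
       SUM(c.cash_flow_amount)
FROM   CM_CONTRACT_DETAIL a, CM_CONTRACT b, CM_CONTRACT_CASHFLOW c
WHERE  a.country_code = ip_country_code
AND    a.company_code = ip_company_code
AND    a.dealer_bp_id = ip_bp_id
AND    a.contract_start_date >= ip_start_date
AND    a.contract_start_date <= ip_end_date
AND    a.version_number = b.current_version
AND    a.status_code IN ('00','10')
AND    a.country_code = b.country_code
AND    a.company_code = b.company_code
AND    a.contract_number = b.contract_number
AND    a.country_code = c.country_code
AND    a.company_code = c.company_code
AND    a.contract_number = c.contract_number
AND    a.version_number = c.version_number
AND    c.cash_flow_type_code IN ('07','13');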
Good luck :-)
How to reduce the execution time of this simple SQL query in SQL Server?
select *
from companybackup
where tiRecordStatus = 1
and (iAccessCode < 3 or chUpdateBy = SUSER_SNAME())
It returns 38,681 rows and takes nearly 10 minutes 23 seconds. The table has 50 columns, and I even created indexes on all of them to reduce the time, but that didn't help; I also checked the nolock option and all the available solutions, but couldn't reduce the execution time.
What might be the issue?
If this is a critical query, you can create an index designed just for it:
CREATE INDEX ix_MyIndexNameHere ON CompanyBackup
(tiRecordStatus, iAccesscode, chUpdateBy)
This will still require a key lookup since you are returning all fields with *, but it's a lot better than a bunch of single-field indexes.
I am wondering what the SUSER_SNAME() function does; right now it gets executed for every row. Can you try assigning that function's return value to a variable first and then executing the query?
If the function is costly, this will reduce the execution time by a huge margin.
Edit: can anyone tell me why I can post some SQL clauses but not others? I can't post the DECLARE syntax here.
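A minimal sketch of that idea (the variable name is just an illustration):

-- capture the value once, then reference the variable in the query
DECLARE @CurrentUser sysname;
SET @CurrentUser = SUSER_SNAME();

SELECT *
FROM companybackup
WHERE tiRecordStatus = 1
AND (iAccessCode < 3 OR chUpdateBy = @CurrentUser);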
You haven't given much info, but splitting the OR into a union of two queries can often help (optimisers have trouble with OR):
select *
from companybackup
where tiRecordStatus = 1
and iAccessCode < 3
union
select *
from companybackup
where tiRecordStatus = 1
and chUpdateBy = SUSER_SNAME()
If this doesn't help, try creating these indexes:
create index idx1 on companybackup(tiRecordStatus, iAccessCode);
create index idx2 on companybackup(chUpdateBy, tiRecordStatus);
Then force a recalculation of the distribution of index values:
update statistics companybackup;
I received reports that my report-generating application was not working. After my initial investigation, I found that the SQL transaction was timing out. I'm mystified as to why the query for a smaller selection of items takes so much longer to return results.
Quick query (averages 4 seconds to return):
SELECT * FROM Payroll WHERE LINEDATE >= '04-17-2010' AND LINEDATE <= '04-24-2010' ORDER BY EMPLYEE_NUM ASC, OP_CODE ASC, LINEDATE ASC
Long query (averages 1 minute 20 seconds to return):
SELECT * FROM Payroll WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010' ORDER BY EMPLYEE_NUM ASC, OP_CODE ASC, LINEDATE ASC
I could simply increase the timeout on the SqlCommand, but it doesn't change the fact the query is taking longer than it should.
Why would requesting a subset of the items take longer than the query that returns more data? How can I optimize this query?
Most probably, the longer range makes the optimizer select a full table scan with a sort instead of an index scan, and the full scan turns out to be faster.
The index traversal can take longer than a table scan for numerous reasons.
For instance, it is probable that the table itself fits completely into the cache, but not the table and the index at the same time.
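One way to test that theory is to force a scan for the shorter range and compare the timings against the un-hinted run. The INDEX(0) table hint (which forces a table or clustered index scan) is only for this experiment, not for production:

-- experiment only: compare elapsed time against the un-hinted plan
SELECT *
FROM Payroll WITH (INDEX(0))
WHERE LINEDATE >= '04-18-2010' AND LINEDATE <= '04-24-2010'
ORDER BY EMPLYEE_NUM ASC, OP_CODE ASC, LINEDATE ASC;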
That is besides possibly not having an index on the LINEDATE column.
Beyond that, it depends on the database server you use; some maintain statistics which influence the query plan to optimize access.
For Informix, e.g., you would run an UPDATE STATISTICS statement,
and you could examine the costs in a query plan using SET EXPLAIN ON.
You should check your documentation for similar statements.
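On Informix that might look roughly like this (the table name is taken from the question above; the date literal format depends on your DBDATE setting, and other servers use different statements):

-- Informix sketch
UPDATE STATISTICS HIGH FOR TABLE payroll;

SET EXPLAIN ON;   -- the query plan is written to sqexplain.out
SELECT * FROM payroll
WHERE linedate >= '04/18/2010' AND linedate <= '04/24/2010';
SET EXPLAIN OFF;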