Performance Issue in Oracle (Difference between common table expressions and global temp table) - sql

I am having a performance issue with one of my queries.
The structure of the query is like below
WITH a02 AS (
    ...
)
SELECT *
FROM a02
INNER JOIN a03 ON a02.id = a03.id;
Table a02 has around 10,000 rows and table a03 around 40,000 rows. The query takes about 1.5 hours to run.
However, if I create a02 as a global temporary table and then run the query below, it takes less than 5 minutes. Is that normal behavior?
SELECT *
FROM
a02
inner join
a03 on a02.id=a03.id
I am hesitant to use a global temporary table because we sometimes get the following error when dropping it:
DROP TABLE A02;
SQL Error: ORA-14452: attempt to create, alter or drop an index on temporary table already in use
14452. 00000 - "attempt to create, alter or drop an index on temporary table already in use"

When you create a global temporary table (or even a local temporary table), Oracle has good statistics on the table -- because you just created it. This can affect the execution plan.
It would seem that Oracle is choosing a suboptimal execution plan for the query. I would suggest creating an index on id in each of the tables -- if possible. Or at least have an index on a03(id).
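For example, the suggested indexes could be created like this (the index names here are illustrative, not from the question):

```sql
-- At minimum, index the join column on the larger table
CREATE INDEX a03_id_idx ON a03 (id);
-- And, if possible, on a02 as well
CREATE INDEX a02_id_idx ON a02 (id);
```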

I would recommend identifying the SQL ID for the query and then using the SQL Monitor Report, as it will tell you exactly what the execution plan is and where the SQL is spending most of its time.
A simple way to get the SQL Monitor Report from SQL*Plus follows:
spool c:\temp\SQL_Monitor_rpt.html
SET LONG 1000000
SET LONGCHUNKSIZE 1000000
SET LINESIZE 1000
SET PAGESIZE 0
SET TRIM ON
SET TRIMSPOOL ON
SET ECHO OFF
SET FEEDBACK OFF
alter session set "_with_subquery" = optimizer;
SELECT DBMS_SQLTUNE.report_sql_monitor(
sql_id => '&SQLID' ,
type => 'HTML',
report_level => 'ALL') AS report
FROM dual;
spool off
Most likely there is a full table scan going on, plus joins without the use of an index. You should probably index the columns in both tables that are involved in the join condition. Also, you can try the /*+ MATERIALIZE */ hint in your WITH clause subqueries to mimic a global temporary table without actually needing one.
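Applied to the structure in the question, the hint would sit inside the CTE's SELECT, something like this (the subquery body is elided, as in the original):

```sql
WITH a02 AS (
    SELECT /*+ MATERIALIZE */ ...   -- hint goes inside the CTE's own SELECT
)
SELECT *
FROM a02
INNER JOIN a03 ON a02.id = a03.id;
```

The MATERIALIZE hint asks the optimizer to resolve the CTE into a temporary result set once, rather than merging it into the outer query, which is what creating the global temporary table achieved manually.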

Related

Why is a table with only 6 rows taking around 20 seconds to return results in Oracle?

I have a table with only 6 rows, but when I run a simple select on it ("select * from table_name") it takes 20 seconds to return the records. I need to understand why it is taking so much time and what can be done to improve it.
Some information about the table that may help in answering:
1.) Only Not null constraints are present on two columns of the table. No other constraint.
2.) No index or partitions on the table.
3.) The execution plan for the table is attached as an image in the original post (not reproduced here).
It looks like the segment of the table was extended (a lot of rows were inserted) and then most of the rows were deleted.
Read about ALTER TABLE ... MOVE UPDATE INDEXES and about ALTER TABLE ... SHRINK SPACE:
https://oracle-base.com/articles/misc/alter-table-shrink-space-online
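A sketch of the two maintenance operations mentioned, using a placeholder table and index name (MOVE rebuilds the segment and resets the high water mark; SHRINK SPACE compacts it in place):

```sql
-- Option 1: rebuild the segment. On releases before 12.2, a plain MOVE
-- marks the table's indexes UNUSABLE, so rebuild them afterwards.
ALTER TABLE my_table MOVE;
ALTER INDEX my_table_pk REBUILD;

-- Option 2: shrink in place (requires row movement to be enabled)
ALTER TABLE my_table ENABLE ROW MOVEMENT;
ALTER TABLE my_table SHRINK SPACE;
```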

SQL Server : how do I add a hint to every query against a table?

I have a transactional database. One of the tables is almost empty (A). It has a unique indexed column (x) and no clustered index.
Two concurrent transactions:
begin tran
insert into A (x,y,z) values (1,2,3)
WAITFOR DELAY '00:00:02'; -- or manually run the first 2 lines only
select * from A where x=1; -- small tables produce a table-scan plan here, which blocks against the transaction below
commit

begin tran
insert into A (x,y,z) values (2,3,4)
WAITFOR DELAY '00:00:02';
-- on a table with 3 or fewer pages this hint is needed to avoid blocking against the above transaction
select * from A with(forceseek) -- force a plan of index seek + RID lookup
where x=2;
commit
My problem is that when the table has very few rows the 2 transactions can deadlock, because SQL Server generates a table scan for the select, even though there is an index, and both wait on the lock held by the newly inserted row of the other transaction.
When there are lots of rows in this table, the query plan changes to an index seek, and both happily complete.
When the table is small, the WITH(FORCESEEK) hint forces the correct query plan (5% more expensive for tiny tables).
Is it possible to provide a default hint for all queries on a table, so they behave as if they had the FORCESEEK hint?
The deadlocking code above was generated by Hibernate; is it possible to have Hibernate emit the needed query hints?
We could make the tables pretend to be large enough that the query optimizer selects the index seek, using the undocumented features of UPDATE STATISTICS: http://msdn.microsoft.com/en-AU/library/ms187348(v=sql.110).aspx . Can anyone see any downsides to making all tables with fewer than 1000 rows pretend they have 1000 rows over 10 pages?
You can create a Plan Guide.
Or you can enable Read Committed Snapshot isolation level in the database.
Better still: make the index clustered.
For small tables that experience a high update ratio, perhaps you can apply the advice from Using Tables as Queues.
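For the Read Committed Snapshot option, the switch is a one-time database-level setting (the database name below is a placeholder; the command needs no other active sessions in the database to complete):

```sql
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
```

Under this isolation level, readers see the last committed version of a row instead of blocking on writers' locks, which sidesteps the deadlock described above.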
Can anyone see any downsides to making all tables with less than 1000 rows pretend they have 1000 rows over 10 pages?
If the table appears in another, more complex query (think joins), then the cardinality estimates may cascade wildly off and produce bad plans.
You could create a view that is a copy of the table but with the hint and have queries use the view instead:
create view A2 as
select * from A with(forceseek)
If you want to preserve the table name used by queries, rename the table to something else then name the view "A":
sp_rename 'A', 'A2';
create view A as
select * from A2 with(forceseek)
Just to add another option you may consider.
You can make updates lock the entire table by using
ALTER TABLE MyTable SET (LOCK_ESCALATION = TABLE);
This workaround is fine if you do not have too many updates that will queue and slow performance.
It is table-wide and no updates to other code is needed.

Table Valued Parameters with Estimated Number of Rows 1

I have been searching the internet for hours trying to figure out how to improve the performance of my query using table-valued parameters (TVP).
After hours of searching, I finally determined what I believe is the root of the problem. Upon examining the Estimated Execution plan of my query, I discovered that the estimated number of rows for my query is 1 anytime I use a TVP. If I exchange the TVP for a query that selects the data I am interested in, then the estimated number of rows is much more accurate at around 7400. This significantly increases the performance.
However, in the real scenario, I cannot use a query, I must use a TVP. Is there any way to have SQL Server more accurately predict the number of rows when using a TVP so that a more appropriate plan will be used?
TVPs are table variables, which don't maintain statistics and hence report only 1 row. There are two ways to improve statistics on TVPs:
If you have no need to modify any of the values in the TVP or add columns to it to track operational data, then you can add a simple, statement-level OPTION (RECOMPILE) on any query that uses a table variable (TVP or locally created) and does more with that table variable than a simple SELECT (i.e. doing INSERT INTO RealTable (columns) SELECT (columns) FROM @TVP; does not need the statement-level recompile). Run the following test in SSMS to see this behavior in action:
DECLARE @TableVariable TABLE (Col1 INT NOT NULL);

INSERT INTO @TableVariable (Col1)
SELECT so.[object_id]
FROM [master].[sys].[objects] so;

-- Ctrl+M to turn on "Include Actual Execution Plan"

SELECT * FROM @TableVariable; -- Estimated Number of Rows = 1 (incorrect)

SELECT * FROM @TableVariable
OPTION (RECOMPILE); -- Estimated Number of Rows = 91 (correct)

SELECT * FROM @TableVariable; -- Estimated Number of Rows = 1 (back to incorrect)
Create a local temporary table (single #) and copy the TVP data into it. While this does duplicate the data in tempdb, the benefits are:
better statistics for a temp table as opposed to a table variable (i.e. no need for statement-level recompiles)
ability to add columns
ability to modify values
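The copy itself is a one-liner; sketched here with placeholder names for the TVP and the temp table:

```sql
-- @MyTvp is the incoming table-valued parameter (name illustrative)
SELECT * INTO #TvpCopy FROM @MyTvp;
-- Subsequent statements join against #TvpCopy, which has real statistics
```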

Why can an SQL query take so long to return results?

I have an SQL query as simple as:
select * from recent_cases where user_id=1000000 and case_id=10095;
It takes up to 0.4 seconds to execute it in Oracle. And when I do 20 requests in a row, it takes > 10s.
The table 'recent_cases' has 4 columns: ID, USER_ID, CASE_ID and VISITED_DATE. Currently there are only 38 records in this table.
Also, there are 3 indexes on this table: on ID column, on USER_ID column, and on (USER_ID, CASE_ID) columns pair.
Any ideas?
One theory -- the table has a very large data segment and high water mark near the end, but the statistics are not prompting the optimiser to use an index. Therefore you're getting a slow full table scan. You could ALTER TABLE ... MOVE and rebuild the indexes to fix such a problem, or COALESCE it.
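The MOVE-and-rebuild fix, sketched against the table from the question (the index names are placeholders for the three indexes mentioned):

```sql
ALTER TABLE recent_cases MOVE;
-- MOVE marks the indexes UNUSABLE, so rebuild each of them
ALTER INDEX recent_cases_id_idx REBUILD;
ALTER INDEX recent_cases_user_id_idx REBUILD;
ALTER INDEX recent_cases_user_case_idx REBUILD;
```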
Oracle has a command called ANALYZE TABLE. Gathering statistics this way can speed up select statements a lot, even if there are just a few rows in the table.
Here are some links which might help you:
http://www.dba-oracle.com/t_oracle_analyze_table.htm
http://docs.oracle.com/cd/B28359_01/server.111/b28310/general002.htm
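For example (ANALYZE as described in the links above, though on current Oracle versions DBMS_STATS is the recommended interface for gathering statistics):

```sql
ANALYZE TABLE recent_cases COMPUTE STATISTICS;
-- or, preferably:
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'RECENT_CASES');
```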

SQL optimization distinct

Suppose I have an SQL query against an Oracle database to get the data, as below:
select distinct ORDER_TYPE_name as ORDER_TYPE from
PRODUCT_MASTER_LIST where PROJECT_ID = 99999
order by ORDER_TYPE ASC
I now have 5000 records with the following order Types:
Red
Yellow
Green
Black
null
Unclassified
How can I optimise the query to shorten its execution time?
Note: when I look at the execution plan, there are many full accesses of the table.
You can define an index on those two columns to prevent table scans. That should bring down the execution time by a significant extent.
CREATE INDEX IX_ProductMasterList_OrderType
ON PRODUCT_MASTER_LIST(PROJECT_ID, ORDER_TYPE);
I think an index on PROJECT_ID could be the right solution. It depends on the selectivity of this column.
CREATE INDEX PRODUCT_ML_PROJECT_ID_IDX ON PRODUCT_MASTER_LIST(PROJECT_ID);
If you plan to run the query often, use a bind variable; this can drastically increase performance.
Example:
create or replace procedure dsal(p_empno in number)
as
begin
update emp
set sal=sal*2
where empno = p_empno;
commit;
end;
/
In addition create an index on the columns you intend to query.
CREATE INDEX
ix_emp_01
ON
emp (deptno)
TABLESPACE
index_tbs;
Note: the TABLESPACE clause is optional.