Query Performance Check - sql

My SQL script has 10 queries in a BEGIN END block. I need to check the performance of each query separately in a single run.
Example:
When I run the SQL script, there are 10 queries, so I need performance data for each query once it is executed, like:
BEGIN
query1
performance data for query one
query2
performance data for query two
END

In SQL*Plus you can SET TIMING ON to get the elapsed time of each query. It's as simple as this:
set timing on
select * from t23
/
select * from t42
/
select * from t69
/
set timing off
But wall-clock timings are a pretty crude measure of performance. Oracle has more to offer.
If you have a friendly DBA (and aren't they all?) get them to grant you the PLUSTRACE role. With this role you can just SET AUTOTRACE ON STATISTICS and get useful information regarding resource consumption for each query. You can also include EXPLAIN to get an explain plan after each query too.
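For example, a minimal sketch (assuming the PLUSTRACE role has already been granted):
set autotrace on explain statistics
select * from t23
/
select * from t42
/
set autotrace off
Each query's result set is then followed by its execution plan and a block of session statistics (consistent gets, physical reads, sorts and so on).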


How to execute SQL query without displaying results in Postgres

I want to execute a SQL query without displaying the results. Skipping the result output might make the query faster. Is that possible?
select id
from trips
order by l_pickup <-> (select l_pickup
                       from trips
                       where id = 605689)
limit 100000
This query takes approximately 40 seconds.
explain (analyze) will execute the statement but will not return the results (only the execution plan).
Quote from the manual:
With this option, EXPLAIN actually executes the query, and then displays the true row counts and true run time
So you can use:
explain (analyze)
select id
from trips
order by l_pickup <-> (select l_pickup
from trips
where id = 605689)
limit 100000;
The runtime reported by that is the time on the server without sending the data to the client. It will also show you what the slowest part of the statement is.
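If you also want to see I/O details while still discarding the result set, EXPLAIN accepts further options; a small sketch:
explain (analyze, buffers)
select id
from trips
order by l_pickup <-> (select l_pickup
                       from trips
                       where id = 605689)
limit 100000;
The buffers option adds shared-buffer hits and reads for each plan node, which helps distinguish CPU-bound steps from I/O-bound ones.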

How to reliably get the SQL_ID of a query

I know this might seem like a simple question, one for which you might think answers already exist. However ...
Understand that I want this to be reasonably cheap, so it can be logged for every single query executed - or at least the big ones - without much overhead.
My first idea was this query:
select sid,serial#,prev_sql_id from v$session where audsid=userenv('sessionid');
My idea was that if I ran this right after my target query, I would capture the correct sql_id through prev_sql_id.
However ... I was not. I was getting a different SQL: apparently, in between my target SELECT statement and the query for prev_sql_id, something else ran. In my case auditing is enabled, and I was capturing the insert into the SYS.AUD$ table. No good.
As my main purpose for this attempt was to capture the execution plan for the query (as it was executed and cached in the shared pool), I thought that instead I could simply run this query:
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR());
Documentation states that with a NULL SQL_ID parameter, it will show the plan for the most recent query run. I was hoping that this would take care of the earlier issue. However ... I got the plan for the exact same insert into the SYS.AUD$ table.
You might say, ok, then just simply put a comment in your query that allows to easily capture the SQL_ID, like in following:
SELECT /* SQL: 1234-12 */ * FROM DUAL;
Then I can try to find the SQL_ID as follows:
SELECT * FROM V$SQLAREA WHERE sql_text like '%SQL: 1234-12%';
That will give me several possible candidates, among which the V$SQLAREA query itself is also included. The problem here is that I would need a unique comment in every query I run, which would cause every query to be hard-parsed.
I have tried other solutions where I go through history, but that comes at a much bigger cost. I have searched for other solutions; they all seem to fall short in some way.
Related Articles:
Blog: Displaying and Reading the execution plans for a SQL Statement
Oracle queries executed by a session
You could use the new SQL*Plus option:
SET FEEDBACK ON SQL_ID;
SQL_ID returns the sql_id for the SQL or PL/SQL statements that are executed. The sql_id will be assigned to the predefined variable _SQL_ID. You can use this predefined variable to debug the SQL statement that was executed. The variable can be used like any other predefined variable, such as _USER and _DATE.
SQL> SET FEEDBACK ON SQL_ID
SQL> SELECT * FROM DUAL;
D
-
X
1 row selected.
SQL_ID: a5ks9fhw2v9s1
--
SQL> SELECT sql_text FROM v$sql WHERE sql_id = '&_sql_id';
SQL_TEXT
-----------------------------------------------------
SELECT * FROM DUAL
1 row selected.
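Since the original goal was to capture the execution plan, that predefined variable can be fed straight into DBMS_XPLAN, along the lines of the earlier attempt:
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&_sql_id', NULL));
Unlike the parameterless DISPLAY_CURSOR() call, this pins the lookup to the statement you just ran, so an intervening audit insert no longer matters.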

How to speed up this simple query

With SourceTable having > 15MM records and Bad_Phrase having > 3K records, the following query takes almost 10 hours to run on SQL Server 2005 SP4.
Update [SourceTable]
Set Bad_Count = (Select count(*)
from Bad_Phrase
where [SourceTable].Name like '%'+Bad_Phrase.PHRASE+'%')
In English, this query counts how many of the phrases listed in Bad_Phrase appear as a substring of the column [Name] in SourceTable, and places that result in the column Bad_Count.
I would like some suggestions on how to have this query run considerably faster.
For a lack of a better idea, here is one:
I don't know if SQL Server natively supports parallelizing an UPDATE statement, but you can try to do it yourself manually by partitioning the work that needs to be done.
For instance, just as an example, if you can run the following two update statements in parallel, either manually or by writing a small app, I'd be curious to see whether you can bring down your total processing time.
Update [SourceTable]
Set Bad_Count=(
Select count(*)
from Bad_Phrase
where [SourceTable].Name like '%'+Bad_Phrase.PHRASE+'%'
)
where Name < 'm'
Update [SourceTable]
Set Bad_Count=(
Select count(*)
from Bad_Phrase
where [SourceTable].Name like '%'+Bad_Phrase.PHRASE+'%'
)
where Name >= 'm'
So the 1st update statement takes care of updating all the rows whose names start with the letters a through l, and the 2nd takes care of m through z.
It's just an idea, and you can try splitting this into smaller chunks and more parallel update statements, depending on the capacity of your SQL Server machine.
Sounds like your query is scanning the whole table. Do your tables have proper indexes on them? Putting an index on columns that appear in a WHERE clause is a good place to start. You can also get the cost of the query in SQL Server Management Studio ("Display Estimated Execution Plan", or, if you're willing to wait, "Include Actual Execution Plan"; both are buttons in the query window). The plan will provide insight into what is taking forever and possibly steer you toward writing faster queries.
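As a rough illustration only (the index name is invented, and a LIKE pattern with a leading wildcard such as '%phrase%' cannot seek on it, though the optimizer may still prefer a narrower index scan over a full table scan):
CREATE INDEX IX_SourceTable_Name ON [SourceTable] (Name);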
You are updating the table using a subquery against the same table; every row update will scan the whole table, and that can cause excessive execution time. I think it is better to first insert all the data into a #temp table and then use the #temp table in your update statement. Or you can JOIN the source table and the temp table as well.
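A hypothetical sketch of that approach (it assumes SourceTable has a key column named ID; adjust to your schema):
-- build the counts once into a temp table
SELECT s.ID, COUNT(b.PHRASE) AS Bad_Count
INTO #counts
FROM [SourceTable] s
LEFT JOIN Bad_Phrase b
  ON s.Name LIKE '%' + b.PHRASE + '%'
GROUP BY s.ID;

-- then apply them with a set-based join instead of a correlated subquery
UPDATE s
SET s.Bad_Count = c.Bad_Count
FROM [SourceTable] s
JOIN #counts c ON c.ID = s.ID;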

oracle functional index performance

I have a table with 226 million rows that has a varchar2(2000) column. The first 10 characters are indexed with a function-based index on SUBSTR("txtField",1,10).
I am running a query such as this:
select count(1)
from myTable
where SUBSTR("txtField",1,10) = 'ABCDEFGHIJ';
The value does not exist in the database, so the result is 0.
The explain plan shows that the operation performed is "INDEX (RANGE SCAN)", which I would expect, and the cost is 4. When I run this query it takes 114 seconds on average.
If I change the query and force it to not use the index:
select count(1)
from myTable
where SUBSTR("txtField",1,9) = 'ABCDEFGHI';
The explain plan shows the operation will be a "TABLE ACCESS (FULL)", which makes sense. The cost is 629,000. When I run this query it takes 103 seconds on average.
I am trying to understand how scanning an index can take longer than reading every record in the table and performing the substr function on a field.
Followup:
There are 230M+ rows in the table and the query returns 17 rows; I selected a new value that is in the database. Initially I was executing with a value that was not in the database and returned zero rows. It seems to make no difference.
Querying for information on the index yields:
CLUSTERING_FACTOR=201808147
LEAF_BLOCKS=1131660
I am running the query with AUTOTRACE ON and the gather_plan_statistics hint, and will add those results when they are available.
Thanks for all the suggestions.
There are a lot of possibilities.
You need to look at the actual execution plan, though.
You can run the query with the /*+ gather_plan_statistics */ hint, and then execute:
select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
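For example, a sketch using the query from the question:
select /*+ gather_plan_statistics */ count(1)
from myTable
where SUBSTR("txtField",1,10) = 'ABCDEFGHIJ';

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
The ALLSTATS LAST format reports, for each plan step, the actual rows, buffer gets and physical reads of the last execution, which should show where the time is really going.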
You should also look into running a trace/tkprof to see what is actually happening - your DBA should be able to assist you with this.

Inserting into temp table from view is very slow

I am using different temp tables in my query. When I execute the query below
select * from myView
It takes only 5 seconds to execute.
but when I execute
select * into #temp from myView
It takes 50 seconds (10 times longer than the query above).
We migrated from SQL Server 2000 to SQL Server 2008 R2. In SQL Server 2000 both queries took the same time, but in SQL Server 2008 R2 the SELECT ... INTO takes 10 times longer.
Old question, but as I had a similar issue (though on SQL Server 2014) and resolved it in a way which I have not seen on any readily available resource, thought I would share in hopes of it being helpful to someone else.
I had a similar situation: a view I had created was taking 21 seconds to return its complete result set, but would take 10+ minutes (at which point I stopped the query) when I converted it into a SELECT ... INTO. The SELECT was a simple one, with no joins and no predicates. My hunch was that the optimizer was altering the original plan based on the additional INTO clause: instead of simply pulling the data set as in the first instance and then performing the INSERT, it altered the plan in a way that ran very sub-optimally.
I first tried an OPENQUERY, attempting to force the result set to be generated first and then inserted into the temp table. Total running time for this method was 23 seconds, obviously much closer to the original SELECT time. Following this, I returned to my original SELECT ... INTO query and added an OPTION (FORCE ORDER) hint to try to replicate the OPENQUERY behavior. This seemed to do the trick, and the time was on par with the OPENQUERY method: 23 seconds.
I don't have enough time at the moment to compare the query plans, but as a quick and dirty option if you run into this issue, you can try:
select * into #temp from myView option (force order);
Yeah, I would check the execution plan for your command. There may be overhead from a sort or something.
I think your tempdb database is in trouble. Maybe slow I/O, fragmentation, broken RAID, etc.
Do you have an ORDER BY clause in your SELECT statement, such as select * from myView order by col1, before inserting into the temp table? If there is an ORDER BY, it slows down insertion into the temp table heavily. If that is the case, remove the ORDER BY while the insertion happens and sort after the insertion has happened, like:
select *
into #temp
from myView
then apply the ORDER BY:
select * from #temp order by col1