PL/SQL Select Statement Performance Tuning - sql

I am trying to improve the performance of a SELECT statement, but I have been unable to solve the problem: the UNION-ALL and HASH UNIQUE operations consume most of the time when this query executes. Are there any ideas on how to improve the SELECT statement so that it executes faster? Below is the explain plan of the query.

Related

How to ensure code and SSMS use same execution plan for parameterized query

I'm debugging a parameter sniffing issue in SQL Server 2014.
My application performs a query along the lines of (drastically reduced for simplicity):
SELECT somefields FROM MyTable WHERE anotherField = @myParameter
where @myParameter is added through the command.Parameters.Add() method. This query takes too long to run and times out.
When trying to replicate the issue, I went to SSMS and tried the following query:
DECLARE @myParameter int = 10 --same value as in the code
SELECT somefields FROM MyTable WHERE anotherField = @myParameter
And the query runs fast with a different execution plan from the code.
I want to try some tweaks to the query to see if I can improve my code's execution plan, but I need a way to make SSMS behave the way my code does.
How can I write my query in SSMS so that SQL Server calculates the execution plan the same way it does for my application?
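One way to get closer to the application's behaviour in SSMS (a sketch, assuming the application sends a parameterised batch, and reusing the placeholder names from the question) is to run the statement through sp_executesql rather than declaring a local variable: a DECLAREd variable is treated as an unknown value at compile time, while sp_executesql parameters can be sniffed the same way application parameters are.
-- Hypothetical reproduction of the application's parameterised call in SSMS.
EXEC sp_executesql
    N'SELECT somefields FROM MyTable WHERE anotherField = @myParameter',
    N'@myParameter int',
    @myParameter = 10; -- same value as in the code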

What makes an INSERT INTO ... SELECT SQL statement run very slowly (no result set returned)?

The SQL statement below executes very slowly (environment: Oracle 9). Please advise on the cause, or hints for debugging it. Thanks a lot.
INSERT INTO tmp_table (col1,col2,col3,col4,col5,col6,col7,col8,col9)
select * from my_view where col1 = 1;
What I have tested so far: executing the SELECT sub-statement on its own takes 3 seconds and returns 1 record, but executing the entire statement runs endlessly and returns nothing, as if it were hanging.
When a query appears to execute quickly on its own, in the sense that the last row is returned quickly rather than just the first row, but runs slowly when used in an INSERT statement, the usual suspects are:
A change in the execution plan, driven by the optimiser believing that an all_rows goal is now appropriate where previously a first_rows goal was used. Check the execution plan for changes (a way to compare the plans is sketched below).
Some action being executed as a result of the insert. Slow triggers on the target table, or materialised view logging, are possible causes. This would not be the case when running "create table ... as select ..".
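A minimal way to compare the two plans in Oracle, reusing the table and view names from the question (and assuming DBMS_XPLAN is available, as it is from 9i Release 2 onward), might look like this:
-- Plan for the bare SELECT.
EXPLAIN PLAN FOR
  SELECT * FROM my_view WHERE col1 = 1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Plan for the full INSERT ... SELECT; compare the two outputs for differences.
EXPLAIN PLAN FOR
  INSERT INTO tmp_table (col1, col2, col3, col4, col5, col6, col7, col8, col9)
  SELECT * FROM my_view WHERE col1 = 1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
-- Triggers on the target table are the other suspect mentioned above.
SELECT trigger_name, status FROM user_triggers WHERE table_name = 'TMP_TABLE';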

How to investigate why a SQL script that runs every day in 2 minutes is suddenly taking 2 hours?

My colleague asked me a question today
"I have a SQL script containing 4 select queries. I have been using it
daily for more than a month, but yesterday the same script took 2 hours and
I had to abort execution."
His questions were
Q1. What happened to this script on that day?
Q2. How can I check which of those 4 queries got executed, and which one was the culprit for the abort?
My answer to Q2 was to use SQL Profiler and check the trace for the SQL statement events.
For Q1:
I asked him a few questions:
What was the volume of data on that day? His answer: No change.
Was there any change in indexing, i.e. might someone have dropped an index? His answer: No change.
Was it trapped in a deadlock (checked via the dynamic management views)? His answer: Not in a deadlock.
What else do you think I should have asked? Could there be any other reason for this?
Since I didn't see the query, I can't paste it here.
Things to look at (SQL Server):
Statistics out of date? Has somebody run a large bulk insert operation? Run update statistics.
Change in indexing? If so, and if it's a stored procedure, check the execution plan and/or recompile it, then check the execution plan again and correct any problems (a sketch of both steps follows the links below).
SQL Server caches execution plans. If your query is parameterized or uses if-then-else logic, and the parameters happen to be an edge case the first time it runs, the cached execution plan can work poorly for ordinary executions. You can read more about this...ah...feature at:
http://www.solidq.com/sqj/Pages/2011-April-Issue/Parameter-Sniffing-Problem-with-SQL-Server-Stored-Procedures.aspx
http://social.msdn.microsoft.com/Forums/en-US/transactsql/thread/88ff51a4-bfea-404c-a828-d50d25fa0f59
SQL poor stored procedure execution plan performance - parameter sniffing
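A minimal sketch of the first two suggestions above (SQL Server; the table and procedure names are placeholders, not taken from the question):
-- Refresh statistics after a large bulk load (or run EXEC sp_updatestats; for the whole database).
UPDATE STATISTICS dbo.MyTable;
-- Invalidate the cached plan for a suspect procedure so its next execution recompiles it,
-- then inspect the new execution plan.
EXEC sp_recompile 'dbo.MyProcedure';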
In this case my approach would be:
Here is the case: he had to abort the execution because the query was taking more than the expected time and never completed. As I understand it, there might have been a blocking session or an uncommitted transaction on the table being queried (executed by a different user on that day). Since he was running a SELECT statement, and a SELECT waits for other transactions on the same rows to complete (if those transactions started before the SELECT), the query might have been waiting on another transaction's update, insert, or delete. Check for a blocking session, if any.
For a single session, SQL Server switches between threads. You need to check whether the thread running your query is in the 'suspended', 'running', or 'runnable' state. In your case the query might be in the suspended state; investigate which state it is in and why (a DMV query for this is sketched below).
The next thing is fragmentation. Best practice is to have an index rebuild/reorganize job configured in your environment, which removes unnecessary fragmentation so that your query scans fewer pages when returning data; otherwise the query will take longer and longer to return data. Configure the job and run it at least once a week to keep your indexes and pages refreshed.
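A minimal sketch of checking for blocking and the state of active requests, using SQL Server dynamic management views (no object names from the question are assumed):
-- Show active requests, their state (running/runnable/suspended), what they are
-- waiting on, and which session (if any) is blocking them.
SELECT r.session_id, r.status, r.blocking_session_id, r.wait_type, r.wait_time, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;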
Use EXPLAIN to analyze the four queries. That will tell you how the optimizer will be using indexes (or not using indexes).
Add queries to the script to SELECT NOW() in between the statements, so you can measure how long each query took. You can also have MySQL do the arithmetic for you by storing NOW() into a session variable and then using TIMEDIFF() to calculate the difference between the start and finish of the statement.
SELECT NOW() INTO @start;
SELECT SLEEP(5); -- or whatever query you need to measure
SELECT TIMEDIFF(NOW(), @start); -- elapsed time since @start
As @Scott suggests in his comment, use the slow query log to measure the time for long-running queries.
Once you have identified the long-running query, use the query profiler while executing the query to see exactly where it spends its time.
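A minimal sketch of that last step, assuming MySQL's built-in statement profiler (SHOW PROFILE) is available in the version in use:
SET profiling = 1;          -- enable per-statement profiling for this session
SELECT SLEEP(5);            -- or the query under investigation
SHOW PROFILES;              -- list recent statements with their query IDs and durations
SHOW PROFILE FOR QUERY 1;   -- per-stage timing breakdown for the chosen query ID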

How to make the PostgreSQL optimizer build the execution plan AFTER binding parameters?

I'm developing a PL/pgSQL function for PostgreSQL 9.1. When I use variables in a SQL query, the optimizer builds a bad execution plan, but if I replace a variable with its value the plan is fine.
For instance:
v_param := 100;
select count(*)
into result
from <some tables>
where <some conditions>
and id = v_param
performed in 3s
and
select count(*)
into result
from <some tables>
where <some conditions>
and id = 100
performed in 300ms
In the first case the optimizer generates a fixed plan for any value of v_param.
In the second case the optimizer generates a plan based on the specified value, and it is significantly more efficient despite not benefiting from plan caching.
Is it possible to make the optimizer generate the plan after the parameter is bound, i.e. generate a new plan every time I execute the query?
This has been dramatically improved by Tom Lane in the just-released PostgreSQL 9.2; see What's new in PostgreSQL 9.2 particularly:
Prepared statements used to be optimized once, without any knowledge
of the parameters' values. With 9.2, the planner will use specific
plans regarding to the parameters sent (the query will be planned at
execution), except if the query is executed several times and the
planner decides that the generic plan is not too much more expensive
than the specific plans.
This has been a long-standing and painful wart that previously required SET enable_... parameters, wrapper functions using EXECUTE, or other ugly hacks. Now it should "just work".
Upgrade.
For anyone else reading this, you can tell if this problem is biting you because auto_explain plans of parameterised / prepared queries will differ from those you get when you explain the query yourself. To verify, try PREPARE ... SELECT then EXPLAIN EXECUTE and see if you get a different plan from EXPLAIN SELECT (a sketch follows below).
See also this prior answer.
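A minimal sketch of that comparison, using placeholder names modelled loosely on the question (the real tables and conditions are not shown in the post):
-- Plan used for the prepared/parameterised form (before 9.2 this is a generic plan).
PREPARE q(int) AS
  SELECT count(*) FROM some_table WHERE id = $1;
EXPLAIN EXECUTE q(100);
-- Plan produced when the literal value is visible to the planner.
EXPLAIN SELECT count(*) FROM some_table WHERE id = 100;
DEALLOCATE q;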
Dynamic queries don't use cached plans, so you can use the EXECUTE ... USING statement in 9.1 and older. 9.2 should work without this workaround, as Craig wrote.
v_param := 100;
EXECUTE 'select count(*) from <some tables> where <some conditions> and id = $1'
   INTO result
   USING v_param;

Can you force Linq2SQL to NOT use sp_executesql?

So I write a Linq query and it takes 16 seconds to run. I decide to see what the query plan is, so I pull the query out of the Linq to SQL profiler, and it only takes 2 seconds to run. Sigh.
After spending most of the day poking at things and finally getting around to using SQL Server Profiler I see that Linq2SQL is using sp_executesql to run the query. I understand that it's supposed to improve performance because it's more likely to re-use the execution plan... but it seems to have chosen a horrible execution plan to use.
The weirder part is that it only gets slow if I join a specific table, and I have no idea why that specific table is causing a problem.
EDIT Just to clarify the actual issue here:
It's actually two different queries. One is, essentially,
SELECT col1, col2, ... FROM table1, table2 WHERE table1.val IN (1234, 2343, 2435)
The other is
EXEC sp_executesql N'SELECT col1, col2, ... FROM table1, table2 WHERE table1.val IN (@p0, @p1, @p2)',
N'@p0 int,@p1 int,@p2 int',
@p0=1234, @p1=2343, @p2=2435
Your problem doesn't stem from the use of sp_executesql, and so circumventing it (which you can't) will not solve your problems. I suggest you read Erland Sommarskog's excellent article:
Slow in the Application, Fast in SSMS?
Understanding Performance Mysteries
This will give you a deep understanding of why you're getting a performance difference, how to diagnose and consistently reproduce it, and finally, how to solve it.
If the exact same query is fast from one application or server, but slow from another, it's usually all about execution plans. An execution plan is the blueprint the server uses to run the query. The plan is supposed to be created once, and then reused for all queries which differ only in parameter values.
Different execution plans can lead to wildly different performance; a factor of 100 is not at all unusual. As a first step, examine whether the execution plans are different. The Profiler event Performance -> Showplan XML logs the plan (a query for inspecting the plan cache is sketched at the end of this answer).
If the plan is different, one possible cause can be the session options, like ansi nulls:
SET ANSI_NULLS
Another possibility is a different login (the blueprint contains security information, so each security context has its own set of cached execution plans.)
The easiest way to clear the plan cache is to restart the SQL Server service. There's also an advanced command to clear the entire query plan cache:
DBCC FREEPROCCACHE
P.S. If you have a stored procedure that performs differently based on the value of the parameters, it's worth checking out parameter sniffing. But since you're copying the exact same procedure from the profiler, I assume the parameters are identical for both the slow and the fast invocations.
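A minimal sketch of the first step above, checking whether the application and SSMS are getting different cached plans ('%table1%' is just a placeholder filter based on the example query):
-- List cached plans whose statement text matches the query, together with how often
-- each plan has been reused. Different session options (e.g. ANSI_NULLS) or a different
-- parameterisation produce separate cache entries, each potentially with a different plan.
SELECT cp.usecounts, cp.objtype, st.text, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%table1%';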
To answer your question....
NO, you can't...