SQL Optimization Strategy

I'm a student and I'm doing my database assignment.
I want to use indexing and query optimization for my database optimization strategy.
The problem is: how can I prove that my strategy makes an improvement? My lecturer said that for query optimization I can prove it by calculation. Anyone got more ideas? What should I calculate?
What about indexing? I need evidence to prove it. How?

In terms of evidence of optimization, you have to have instrumented code for your test cases (e.g. so you can take timings accurately) and re-runnable test cases. The ideal situation for a re-runnable set of test cases is to also be able to reset to a baseline database, so you can guarantee the starting conditions of the data are the same for every test run.
You also need to understand for each test case other more subtle factors:
Are you running against a cold procedure cache / warm procedure cache.
Are you running against a cold data cache / warm data cache.
For larger datasets, are you using the exact same table, e.g. no page splits have occurred since.
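Putting those pieces together, here is a minimal sketch of a re-runnable, instrumented test case, assuming SQL Server (the database name, table, and backup path are placeholders for your own setup):
-- Reset to the baseline so every run starts from identical data (run from master)
RESTORE DATABASE TestDb FROM DISK = 'C:\baseline\TestDb.bak' WITH REPLACE;
GO
-- Start from a cold cache if that is the scenario you are measuring (test servers only)
DBCC FREEPROCCACHE;       -- cold procedure cache
CHECKPOINT;
DBCC DROPCLEANBUFFERS;    -- cold data cache
GO
-- Instrumented timing around the statement under test
DECLARE @start DATETIME;
SET @start = GETDATE();
SELECT CustomerId, SUM(Total) AS TotalSales
FROM TestDb.dbo.Orders
GROUP BY CustomerId;
SELECT DATEDIFF(MILLISECOND, @start, GETDATE()) AS elapsed_ms;
Comparing the elapsed times (and IO statistics) before and after adding your index or rewriting the query is the kind of calculation you can put in your assignment.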

I would think a before and after explain plan would go a long way towards proving an improvement.
See SQL Server Performance HERE.

Which DBMS are you using?
I suggest you take a look at what tracing options your DBMS product provides. For example, in Oracle you can use SQL Trace and parse the output using tkprof to provide you with the figures you'll need to prove that your database optimization strategy shows an improvement.
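As a rough sketch of that approach in Oracle (the trace identifier and output file names are placeholders):
ALTER SESSION SET tracefile_identifier = 'opt_test';
ALTER SESSION SET sql_trace = TRUE;
-- run the statements you want to measure here
ALTER SESSION SET sql_trace = FALSE;
-- then format the raw trace file from the OS shell, e.g.:
-- tkprof <your_trace_file>.trc opt_test_report.txt sys=no sort=exeela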

Related

What is the best way to spot the slowest block of a SQL query?

I am facing a problem that running a stored procedure is taking too much resources which sometimes causes a time out on the server (especially when the CPU usage is more than 90%).
Can anyone suggest what the best and quickest way is to spot the block which takes much resources, and also suggest a good way to solve it, please?
I am using SQL Server 2005.
You want to use the Query Profiler (explained here), which will show you a graphical representation of your query's execution path, as well as which parts of it are taking the most time.
If you want to know which block is slowest, use the following
SET STATISTICS PROFILE ON
SET STATISTICS IO ON
SET STATISTICS TIME ON
When you run the SP this will display stats for each query.
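For example, assuming a procedure called dbo.usp_MyProc (the name is just a placeholder):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
EXEC dbo.usp_MyProc;   -- per-statement reads and CPU/elapsed times appear in the Messages tab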
If you are using SQL Server Management Studio, you can turn on the execution plan to display information about how the query will be executed by SQL Server, including what percentage of the entire process will be taken up by each sub-process.
Often when doing this, there will be a part of the query that is obviously using most of the resources.
Using this information, you can then make an informed decision about how to tune the database (like adding an index to the offending table(s)).
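For instance, if the plan shows an expensive scan on a column you filter on, a covering index along these lines might help (the table, column, and index names are hypothetical):
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Total);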
You don't need to use SQL Profiler to view an execution plan - just:
SET SHOWPLAN_XML ON
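It does need to be the only statement in its batch, so in practice something like this (the procedure name is a placeholder):
SET SHOWPLAN_XML ON;
GO
EXEC dbo.usp_MyProc;   -- returns the estimated plan as XML instead of executing it
GO
SET SHOWPLAN_XML OFF;
GO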
If there are a bunch of statements in the sproc it can be a bit convoluted to turn on the SET STATISTICS options since you have many chunks of output to associate with input.
The graphical representation of a query plan in SSMS is pretty useful since it shows you the % cost of each statement relative to the cost of the entire batch/sproc. But this is a single value, so it can be more helpful at times just to run Profiler and turn on statement level output. Profiler will give you separate IO and CPU cost for each statement if you add event SQL:StmtCompleted and columns CPU and Reads.

Stored Procedure Execution Plan - Data Manipulation

I have a stored proc that processes a large amount of data (about 5m rows in this example). The performance varies wildly. I've had the process running in as little as 15 minutes and seen it run for as long as 4 hours.
For maintenance, and in order to verify that the logic and processing is correct, we have the SP broken up into sections:
1. TRUNCATE and populate a work table (indexed) that we can verify later with automated testing tools.
2. Join several tables together (including some of these work tables) to produce another work table.
3. Repeat 1 and/or 2 until a final output is produced.
My concern is that this is a single SP and so gets an execution plan when it is first run (even WITH RECOMPILE). But at that time, the work tables (permanent tables in a Work schema) are empty.
I am concerned that, regardless of the indexing scheme, the execution plan will be poor.
I am considering breaking up the SP and calling separate SPs from within it so that they could take advantage of a re-evaluated execution plan after the data in the work tables is built. I have also seen reference to using EXEC to run dynamic SQL which, obviously might get a RECOMPILE also.
I'm still trying to get SHOWPLAN permissions, so I'm flying quite blind.
Are you able to determine whether there are any locking problems? Are you running the SP in sufficiently small transactions?
Breaking it up into subprocedures should have no benefit.
Somebody should be concerned about your productivity, working without basic optimization resources. That suggests there may be other possible unseen issues as well.
Grab the free copy of "Dissecting Execution Plan" in the link below and maybe you can pick up a tip or two from it that will give you some idea of what's really going on under the hood of your SP.
http://dbalink.wordpress.com/2008/08/08/dissecting-sql-server-execution-plans-free-ebook/
Are you sure that the variability you're seeing is caused by "bad" execution plans? This may be a cause, but there may be a number of other reasons:
"other" load on the db machine
when using different data, there may be "easy" and "hard" data
issues with having to allocate more memory/file storage
...
Have you tried running the SP with the same data a few times?
Also, in order to figure out what is causing the runtime variability, I'd try to do some detailed measuring to pin the problem down to a specific section of the code. (The easiest way would be to insert some log calls at various points in the SP.) Then try to explain why that section is slow (other than "5M rows" ;-)) and figure out a way to make that faster.
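A simple way to do that logging in T-SQL is a timing variable plus a log table (Work.ProcLog is a made-up name):
DECLARE @t0 DATETIME;
SET @t0 = GETDATE();

-- ... section 1 of the procedure ...

INSERT INTO Work.ProcLog (Step, ElapsedMs)
VALUES ('section 1', DATEDIFF(MILLISECOND, @t0, GETDATE()));
SET @t0 = GETDATE();

-- ... section 2 of the procedure ...

INSERT INTO Work.ProcLog (Step, ElapsedMs)
VALUES ('section 2', DATEDIFF(MILLISECOND, @t0, GETDATE()));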
For now, I think there are a few questions to answer before going down the "splitting up the sp" route.
You're right, it is quite difficult for you to get a clear picture of what is happening behind the scenes until you can get the "actual" execution plans from several executions of your overall process.
One point to consider, perhaps: are your work tables physical or temporary tables? If they are physical, you will get a performance gain by inserting new data into a new table without an index (i.e. a heap), which you can then build an index on after all the data has been inserted.
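A rough sketch of that pattern (all object names here are placeholders):
IF OBJECT_ID('Work.OrderStage') IS NOT NULL
    DROP TABLE Work.OrderStage;

SELECT o.OrderId, o.CustomerId, o.Total
INTO Work.OrderStage              -- SELECT ... INTO creates a heap, so the load itself is unindexed
FROM dbo.Orders AS o;

CREATE CLUSTERED INDEX IX_OrderStage_OrderId
ON Work.OrderStage (OrderId);     -- build the index once, after all rows are in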
Also, what is the purpose of your process? It sounds like you are moving quite a bit of data around, in which case you may wish to consider the use of partitioning. You can switch data in and out of your main table with relative ease.
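As a minimal illustration of partition switching (this assumes the staging table and the target partition are already aligned on the same partition scheme, with matching constraints):
ALTER TABLE Work.OrderStage
SWITCH TO dbo.Orders PARTITION 3;   -- metadata-only move of the staged rows into partition 3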
Hope what I have detailed is clear but please feel free to pose further questions.
Cheers, John
In several cases I've seen this level of diversity of execution times / query plans comes down to statistics. I would recommend some tests running update stats against the tables you are using just before the process is run. This will both force a re-evaluation of the execution plan by SQL and, I suspect, give you more consistent results. Additionally you may do well to see if the differences in execution time correlate with re-indexing jobs by your dbas. Perhaps you could also gather some index health statistics before each run.
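For example (object names are placeholders):
UPDATE STATISTICS Work.OrderStage WITH FULLSCAN;   -- refresh stats on a specific table just before the run
EXEC sp_updatestats;                               -- or refresh everything the engine considers stale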
If not, as other answerers have suggested, you are more likely suffering from locking and/or contention issues.
Good luck with it.
The only thing I can think that an execution plan would do wrong when there's no data is err on the side of using a table scan instead of an index, since table scans are super fast when the whole table will fit into memory. Are there other negatives you're actually observing or are sure are happening because there's no data when an execution plan is created?
You can force usage of indexes in your query...
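For example, with a table hint (the table and index names are hypothetical):
SELECT o.OrderId, o.Total
FROM dbo.Orders AS o WITH (INDEX (IX_Orders_CustomerId))
WHERE o.CustomerId = 42;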
Seems to me like you might be going down the wrong path.
Is this an infeed or outfeed of some sort or are you creating a report? If it is a feed, I would suggest that you change the process to use SSIS which should be able to move 5 million records very fast.

How do I clear oracle execution plan cache for benchmarking?

On Oracle 10gR2, I have several SQL queries whose performance I am comparing. But after their first run, the v$sql table has the execution plan stored for caching, so for one of the queries I go from 28 seconds on the first run to 0.5 seconds afterwards.
I've tried
ALTER SYSTEM FLUSH BUFFER_CACHE;
After running this, the query consistently runs at 5 seconds, which I do not believe is accurate.
I thought maybe I could delete the line item itself from the cache:
delete from v$sql where sql_text like 'select * from....
but I get an error about not being able to delete from view.
Peter gave you the answer to the question you asked.
alter system flush shared_pool;
That is the statement you would use to "delete prepared statements from the cache".
(Prepared statements aren't the only objects flushed from the shared pool, the statement does more than that.)
As I indicated in my earlier comment (on your question), v$sql is not a table. It's a dynamic performance view, a convenient table-like representation of Oracle's internal memory structures. You only have SELECT privilege on the dynamic performance views, you can't delete rows from them.
flush the shared pool and buffer cache?
The following doesn't answer your question directly. Instead, it answers a fundamentally different (and maybe more important) question:
Should we normally flush the shared pool and/or the buffer cache to measure the performance of a query?
In short, the answer is no.
I think Tom Kyte addresses this pretty well:
http://www.oracle.com/technology/oramag/oracle/03-jul/o43asktom.html
http://www.oracle.com/technetwork/issue-archive/o43asktom-094944.html
<excerpt>
Actually, it is important that a tuning tool not do that. It is important to run the test, ignore the results, and then run it two or three times and average out those results. In the real world, the buffer cache will never be devoid of results. Never. When you tune, your goal is to reduce the logical I/O (LIO), because then the physical I/O (PIO) will take care of itself.
Consider this: Flushing the shared pool and buffer cache is even more artificial than not flushing them. Most people seem skeptical of this, I suspect, because it flies in the face of conventional wisdom. I'll show you how to do this, but not so you can use it for testing. Rather, I'll use it to demonstrate why it is an exercise in futility and totally artificial (and therefore leads to wrong assumptions). I've just started my PC, and I've run this query against a big table. I "flush" the buffer cache and run it again:
</excerpt>
I think Tom Kyte is exactly right. In terms of addressing the performance issue, I don't think that "clearing the oracle execution plan cache" is normally a step for reliable benchmarking.
Let's address the concern about performance.
You tell us that you've observed that the first execution of a query takes significantly longer (~28 seconds) compared to subsequent executions (~5 seconds), even when flushing (all of the index and data blocks from) the buffer cache.
To me, that suggests that the hard parse is doing some heavy lifting. It's either a lot of work, or it's encountering a lot of waits. This can be investigated and tuned.
I'm wondering if perhaps statistics are non-existent, and the optimizer is spending a lot of time gathering statistics before it prepares a query plan. That's one of the first things I would check, that statistics are collected on all of the referenced tables, indexes and indexed columns.
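If statistics do turn out to be missing, gathering them is straightforward (the owner and table names below are placeholders):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',
    tabname => 'ORDERS',
    cascade => TRUE);   -- also gathers statistics on the table's indexes
END;
/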
If your query joins a large number of tables, the CBO may be considering a huge number of permutations for join order.
A discussion of Oracle tracing is beyond the scope of this answer, but it's the next step.
I'm thinking you are probably going to want to trace events 10053 and 10046.
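For reference, a sketch of turning those events on for the current session (the levels shown are common choices, not the only ones):
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';  -- SQL trace with binds and waits
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';   -- optimizer decision trace
-- run the statement that is slow to hard parse here
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SESSION SET EVENTS '10053 trace name context off';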
Here's a link to an "event 10053" discussion by Tom Kyte you may find useful:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:63445044804318
tangentially related anecdotal story re: hard parse performance
A few years back, I did see one query that had elapsed times in terms of MINUTES on first execution, subsequent executions in terms of seconds. What we found was that vast majority of the time for the first execution time was spent on the hard parse.
This problem query was written by a CrystalReports developer who innocently (naively?) joined two humongous reporting views.
One of the views was a join of 62 tables, the other view was a join of 42 tables.
The query used Cost Based Optimizer. Tracing revealed that it wasn't wait time, it was all CPU time spent evaluating possible join paths.
Each of the vendor supplied "reporting" views wasn't too bad by itself, but when two of them were joined, it was agonizingly slow. I believe the problem was the vast number of join permutations that the optimizer was considering. There is an instance parameter that limits the number of permutations considered by the optimizer, but our fix was to re-write the query. The improved query only joined the dozen or so tables that were actually needed by the query.
(The initial, immediate short-term "band aid" fix was to schedule a run of the query earlier in the morning, before the report generation task ran. That made the report generation "faster", because the report generation run made use of the already prepared statement in the shared pool, avoiding the hard parse.)
The band aid fix wasn't a real solution, it just moved the problem to a preliminary execution of the query, when the long execution time wasn't noticed.
Our next step would have probably been to implement a "stored outline" for the query, to get a stable query plan.
Of course, statement reuse (avoiding the hard parse, using bind variables) is the normative pattern in Oracle. It improves performance, scalability, yada, yada, yada.
This anecdotal incident may be entirely different than the problem you are observing.
HTH
It's been a while since I worked with Oracle, but I believe execution plans are cached in the shared pool. Try this:
alter system flush shared_pool;
The buffer cache is where Oracle stores recently used data in order to minimize disk io.
We've been doing a lot of work lately with performance tuning queries, and one culprit for inconsistent query performance is the file system cache that Oracle is sitting on.
It's possible that while you're flushing the Oracle cache the file system still has the data your query is asking for meaning that the query will still return fast.
Unfortunately I don't know how to clear the file system cache - I just use a very helpful script from our very helpful sysadmins.
FIND ADDRESS AND HASH_VALUE OF SQL_ID
select address,hash_value,inst_id,users_executing,sql_text from gv$sqlarea where sql_id ='7hu3x8buhhn18';
PURGE THE PLAN FROM SHARED POOL (the first argument is the ADDRESS and HASH_VALUE from the query above, comma-separated; the 'c' flag means cursor)
exec sys.dbms_shared_pool.purge('0000002E052A6990,4110962728','c');
VERIFY
select address,hash_value,inst_id,users_executing,sql_text from gv$sqlarea where sql_id ='7hu3x8buhhn18';

What is the purpose for using OPTION(MAXDOP 1) in SQL Server?

I have never clearly understood the usage of MAXDOP. I do know that it makes the query faster and that it is the last item that I can use for Query Optimization.
However, my question is, when and where it is best suited to use in a query?
As Kaboing mentioned, MAXDOP(n) actually controls the number of CPU cores that are being used in the query processor.
On a completely idle system, SQL Server will attempt to pull the tables into memory as quickly as possible and join between them in memory. It could be that, in your case, it's best to do this with a single CPU. This might have the same effect as using OPTION (FORCE ORDER), which forces the query optimizer to use the order of joins that you have specified. In some cases, I have seen OPTION (FORCE ORDER) reduce a query from 26 seconds to 1 second of execution time.
Books Online goes on to say that possible values for MAXDOP are:
0 - Uses the actual number of available CPUs depending on the current system workload. This is the default value and recommended setting.
1 - Suppresses parallel plan generation. The operation will be executed serially.
2-64 - Limits the number of processors to the specified value. Fewer processors may be used depending on the current workload. If a value larger than the number of available CPUs is specified, the actual number of available CPUs is used.
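For reference, the hint is applied per statement, e.g. (the query itself is just illustrative):
SELECT CustomerId, SUM(Total) AS TotalSales
FROM dbo.Orders
GROUP BY CustomerId
OPTION (MAXDOP 1);   -- force a serial plan for this statement only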
I'm not sure what the best usage of MAXDOP is, however I would take a guess and say that if you have a table with 8 partitions on it, you would want to specify MAXDOP(8) due to I/O limitations, but I could be wrong.
Here are a few quick links I found about MAXDOP:
Books Online: Degree of Parallelism
General guidelines to use to configure the MAXDOP option
This is a general rambling on Parallelism in SQL Server, it might not answer your question directly.
From Books Online, on MAXDOP:
Sets the maximum number of processors the query processor can use to execute a single index statement. Fewer processors may be used depending on the current system workload.
See Rickie Lee's blog on parallelism and CXPACKET wait type. It's quite interesting.
Generally, in an OLTP database, my opinion is that if a query is so costly it needs to be executed on several processors, the query needs to be re-written into something more efficient.
Why do you get better results adding MAXDOP(1)? Hard to tell without the actual execution plans, but it might be as simple as the execution plan being totally different from the one without the OPTION, for instance using a different index or (more likely) JOINing differently, using MERGE or HASH joins.
As something of an aside, MAXDOP can apparently be used as a workaround to a potentially nasty bug:
Returned identity values not always correct
There are a couple of parallelization bugs in SQL Server with abnormal input. OPTION(MAXDOP 1) will sidestep them.
EDIT: Old. My testing was done largely on SQL 2005. Most of these seem to not exist anymore, but every once in a while we question the assumption when SQL 2014 does something dumb, and we go back to the old way and it works. We never managed to demonstrate that it wasn't just bad plan generation in the more recent cases, though, since SQL Server can be relied on to get the old way right in newer versions. Since all cases were IO-bound queries, MAXDOP 1 doesn't hurt.
Adding my two cents, based on a performance issue I observed.
If simple queries are getting parallelized unnecessarily, it can cause more problems than it solves. However, before adding MAXDOP to the query as a "knee-jerk" fix, there are some server settings to check.
In Jeremiah Peschka - Five SQL Server Settings to Change, MAXDOP and "COST THRESHOLD FOR PARALLELISM" (CTFP) are mentioned as important settings to check.
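A sketch of inspecting and changing those server-wide settings (the value 50 below is purely illustrative, not a recommendation):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';            -- view the current server-wide MAXDOP
EXEC sp_configure 'cost threshold for parallelism';       -- view the current CTFP
EXEC sp_configure 'cost threshold for parallelism', 50;   -- example value only
RECONFIGURE;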
Note: Paul White also mentioned max server memory as a setting to check, in a response to Performance problem after migration from SQL Server 2005 to 2012. A good KB article to read is Using large amounts of memory can result in an inefficient plan in SQL Server.
Jonathan Kehayias - Tuning ‘cost threshold for parallelism’ from the Plan Cache helps to find out good value for CTFP.
Why is cost threshold for parallelism ignored?
Aaron Bertrand - Six reasons you should be nervous about parallelism has a discussion about some scenario where MAXDOP is the solution.
Parallelism-Inhibiting Components are mentioned in Paul White - Forcing a Parallel Query Execution Plan

Oracle 10 optimizer from RULE to COST: why?

Oracle decided to dismiss the rule-based optimizer from version 10g, leaving the cost-based one as the only choice.
I think that a rule-based optimizer has the invaluable positive side of always being predictable. I've instead seen Oracle 10g change execution plans from one day to the next, leading to turtle-like performance.
Which could be the rationale behind this change?
Because everything you can do with the RBO can be done with the CBO.
The CBO can be rule-based too — more than that, you may decide the "rules" yourself.
To create your own "rules", you hint your query or do a CREATE OUTLINE which will hint it for you. As a result, your execution plan is stable.
The outlines are stored in a system schema called OUTLN, and they are editable.
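As a sketch (the outline name, category, hint, and query below are all made up):
CREATE OR REPLACE OUTLINE ord_by_cust
FOR CATEGORY app_outlines
ON SELECT /*+ INDEX(o ord_cust_ix) */ * FROM orders o WHERE o.customer_id = 42;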
As for me, I always supply hints to my queries running in a production database.
The RBO is often predictably bad as well as predictably good. It also doesn't support partitioning and some other database features. The CBO is much better, and as Quassnoi says, plan stability is a feature of the CBO also.
The RBO has been deprecated for a long time; it was really just retained for backwards compatibility with legacy applications. Oracle have been announcing the demise of the RBO since (IIRC) version 8, which came out about 10 years ago.
The RBO was deterministic, but not all that clever. Oracle was originally designed before cost-based optimisers were even available, let alone a mature technology. The RBO has been frozen for a long time and does not support a lot of features of modern Oracle engines.
Cost-based optimisation is much smarter. However, if you had queries optimised for the RBO, they might not play nicely with the CBO. You will probably have to re-write or hint your queries appropriately to tune them for the CBO. There is also a facility to specify a query plan and override the CBO with that plan. This will give you deterministic query execution with stable plans.
(I am not a DBA.)
My understanding is that Oracle has been moving away from the RBO for a long time in favor of the CBO. It seems useful to me to stop supporting a feature that is no longer in active development (given a long enough deprecation period) so that everyone is using the most effective features.
It's interesting that you called predictability an "invaluable" effect of using the rule-based optimizer. It seems like when the data changes enough to make an execution plan sub-optimal, it would be best to switch to a new one. Only in the case you alluded to, where the optimizer flip-flops between two execution plans, would there be a problem with picking the best plan for the data you are actually querying. I'm not sure what advantage predictability offers in a more normal situation.
Ending support of the out-dated optimizer ought to free up support for the newer optimizer.
The reason they moved to cost-based optimization is that it can perform better, since it's based on analyzing statistical information that the rule-based optimizer does not have.
To make the CBO work better, it's important to understand the role that statistics gathering plays in execution plan changes, which directly affect performance. For one thing, running statistics more or less frequently could help you. Here is a good article about the CBO and statistics:
Optimizing Oracle Optimizer Statistics
I think you should do rule-based programming: don't think about the situation, just follow a list of inviolate rules, no matter what the situation is or what you think the better way would be. If the rules say use a FOR loop in case X, then you have to use a loop, even if you know there will only be one iteration; loop from 1 to 1.
Stipulate:
Every query has a best plan.
Every query optimizer will determine that plan x% of the time.
The RBO had nowhere else to go; its percentage accuracy is lower than the CBO's, to be sure, but it was never going to get any better. It was limited, like any rule-based system.