Hi, I am new to the Greenplum database. I have learned that the default optimizer is the legacy planner, and that to activate the Pivotal optimizer we should run "set optimizer = on".
I want to know about following:
What is the main difference between these two optimizers?
For what types of queries should we enable the Pivotal optimizer for better performance?
Anuraag.
Setting optimizer to "on" enables a set of modifications to the original Postgres optimizer to better handle things like queries on very large partitioned tables, subqueries, and CTE SQL (WITH statements). There are other ongoing modifications to make the optimizer code more modular and more efficient on all types of SQL queries, but that is where the focus was originally. I am not on the optimizer team (Pivotal Data Field engineer here) so there are probably others who can give you more in depth answers on this topic than I can.
As far as which queries benefit most, the best answer would be: "it depends" :). Generally, very large partitioned table queries will be handled more efficiently and faster with optimizer = on. Same with CTE queries and queries with sub-selects in them. I have also seen some more standard star schema-type queries run faster with optimizer = on.
In either case, the optimizer depends on very good statistics in the database, so you need to make sure ANALYZE is run after large loads or deletes/truncates.
Your best bet is to run and time your queries with optimizer on and off (it can be set at the session level). The size of your dataset and your database schema structure may show generally faster times with optimizer either on or off, so I would go with whichever setting works best for your particular situation. I work with a lot of Greenplum customers. Some have optimizer set to default on, some set to off. Find the default setting that works best for the bulk of your queries, and use the opposite setting in cases where a query is running "slowly" and see if you get better results.
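As a minimal sketch of that session-level comparison (the table and query here are hypothetical, used purely for illustration; only the "set optimizer" setting comes from the thread):

-- Compare the same query with the Pivotal optimizer on and off in one session
SET optimizer = on;
EXPLAIN ANALYZE
SELECT region, SUM(amount)
FROM sales                       -- hypothetical partitioned fact table
WHERE sale_date >= DATE '2015-01-01'
GROUP BY region;

SET optimizer = off;
EXPLAIN ANALYZE
SELECT region, SUM(amount)
FROM sales
WHERE sale_date >= DATE '2015-01-01'
GROUP BY region;

Comparing the two EXPLAIN ANALYZE outputs (plan shape and elapsed time) tells you which setting suits that particular query.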
I hope this answers your question.
Jim
For partitioned tables, make sure you analyze the root partition, since PQO (the Pivotal Query Optimizer) uses stats on the root partition rather than on the leaf partitions the way the Planner does.
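A minimal sketch of that, assuming the ANALYZE ROOTPARTITION syntax available in recent Greenplum releases (the table name is hypothetical):

ANALYZE ROOTPARTITION sales;   -- refresh the root-level statistics that PQO relies on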
Hello,
I have a join over some tables, and I want to give the DB2 database a hint about which index I want it to use. I know this may result in a slow query, but I have a production and a test database, and I want the same behaviour in both databases, even if the amount of data differs significantly between them or the (index) cache is in a different state.
Is this possible (and how)? I did not find anything in the online manual, which could mean I had the wrong search criteria.
Thanks a million.
This is not something that is commonly done with DB2. However, you can use selectivity. It should still be around in present versions. Adding selectivity clauses to queries will affect the decisions made by the query optimizer.
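A minimal sketch of a SELECTIVITY clause, with hypothetical table and column names (in DB2 LUW the clause generally applies to predicates containing parameter markers or host variables, and in some versions it has to be enabled via the DB2_SELECTIVITY registry variable):

SELECT o.order_id, o.total
FROM orders o
WHERE o.customer_id = ? SELECTIVITY 0.001   -- tell the optimizer to assume ~0.1% of rows qualify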
Also, what Gilbert Le Blanc noted above will work. You can UPDATE the syscat.tables columns and fool DB2 into optimizing the queries for non-existent amounts of data in the rows. Also, the rest of your DB / DBM CFG should match (i.e. the calculated disk and CPU speeds, memory-usage-related settings, etc.) because in some situations they might also matter to some degree.
You can influence the optimizer via a Profile:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1202storedprocedure/index.html
It was recently asked here: Is it possible to replace NL join with HS join in sql
However, I hadn't heard about the selectivity clause, and I think you should try that option first, before creating a profile. But you should only do either after having tried the other options. Follow the steps indicated in the DeveloperWorks tutorial before influencing the optimizer:
Experiment with different SQL optimization classes. The default optimization class is controlled by the DFT_QUERYOPT parameter in the database configuration file.
Attempt to resolve any performance problems by ensuring that proper database statistics have been collected. The more detailed the statistics, the better the optimizer can perform (see RUNSTATS in the DB2 Command Reference, and the sketch after this list).
If the poor access plan is the result of rapidly changing characteristics of the table (i.e. grows very quickly such that statistics get out of date quickly), try marking the table as VOLATILE using the ALTER TABLE command.
Try explaining the query using literal values instead of parameter markers in your predicates. If you are getting different access plans when using parameter markers, it will help you understand the nature of the performance problem better. You may find that using literals in your application will yield a better plan (and therefore better performance) at the cost of SQL compilation overhead.
Try using DB2’s index advisor (db2advis) to see if there are any useful indexes which you may have overlooked.
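For the statistics step above, a minimal sketch of collecting detailed statistics from SQL via ADMIN_CMD (the schema and table name are hypothetical):

CALL SYSPROC.ADMIN_CMD(
  'RUNSTATS ON TABLE myschema.orders WITH DISTRIBUTION AND DETAILED INDEXES ALL')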
One of my jobs is to maintain our database; we usually have performance problems when generating reports and otherwise working with that database.
When I started looking at the queries our ERP sends to the database, I saw a lot of totally needless subselect queries inside the main queries.
As I am not a member of the development team that created the program we use, they do not much like it when I criticize their code and work. Let's say they do not take my review as a serious statement.
So I am asking you a few questions about subselects in SQL:
Do subselects take a lot more time than left outer joins?
Is there any blog, article, or other source that recommends against using subselects?
How can I prove that a query will be faster if we avoid the subselects?
Our database server is MS SQL Server 2005.
"Show, Don't Tell" - Examine and compare the query plans of the queries identified using SQL Profiler. Particularly look out for table scans and bookmark lookups (you want to see index seeks as often as possible). The 'goodness of fit' of query plans depends on up-to-date statistics, what indexes are defined, the holistic query workload.
Execution Plan Basics
Understanding More Complex Query Plans
Using SQL Server Profiler (2005 Version)
Run the queries in SQL Server Management Studio (SSMS) and turn on Query->Include Actual Execution Plan (CTRL+M)
Think yourself lucky they're only subselects (for which, in some cases, the optimiser will produce equivalent 'join' plans) and not correlated sub-queries!
Identify a query that is performing a high number of logical reads, re-write it using your preferred technique and then show how few logicals reads it does by comparison.
Here's a tip. To get the total number of logical reads performed, wrap the query in question with:
SET STATISTICS IO ON
GO
-- Run your query here
SET STATISTICS IO OFF
GO
Run your query, and switch to the messages tab in the results pane.
If you are interested in learning more, there is no better book than SQL Server 2008 Query Performance Tuning Distilled, which covers the essential techniques for monitoring, interpreting and fixing performance issues.
One thing you can do is to load SQL Profiler and show them the cost (in terms of CPU cycles, reads and writes) of the sub-queries. It's tough to argue with cold, hard statistics.
I would also check the query plan for these queries to make sure appropriate indexes are being used, and table/index scans are being held to a minimum.
In general, I wouldn't say sub-queries are bad, if used correctly and the appropriate indexes are in place.
I'm not very familiar with MSSQL, as we use PostgreSQL in most of our applications. However, there should be something like "EXPLAIN" which shows you the execution plan for the query. There you should be able to see the various steps that a query will produce in order to retrieve the needed data.
If you see a lot of table scans or loop joins there without any index usage, it is definitely a sign of slow query execution. With such a tool you should be able to compare the two queries (one with the join, the other without).
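For example, in PostgreSQL (the tables here are hypothetical; SQL Server has its own execution-plan tooling, mentioned below):

EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= DATE '2013-01-01';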
It is difficult to state which is the better way, because it depends heavily on the indexes the optimizer can use in the various cases, and depending on the DBMS, the optimizer may be able to implicitly rewrite a subquery into a join and execute it that way.
If you really want to show which is better you have to execute both and measure the time, cpu-usage and so on.
UPDATE:
Probably it is this one for MSSQL: QueryPlan
From my own experience, both methods can be valid; for example, an EXISTS subselect can avoid a lot of work by breaking out early.
But most of the time, queries with a lot of subselects are written by devs who do not really understand SQL and apply their classic procedural-programmer way of thinking to queries. Then they don't even think about joins and produce some awful queries. So I prefer joins, and I always check subqueries. To be completely honest, I track slow queries, and my first attempt on slow queries containing subselects is to rewrite them as joins. It works much of the time.
But there is no rule establishing that subselects are bad or slower than joins; it's just that bad SQL programmers often write subselects :-)
Do subselects take a lot more time than left outer joins?
This depends on the subselect and left outer joins.
Generally, this construct:
SELECT *
FROM mytable
WHERE mycol NOT IN
(
SELECT othercol
FROM othertable
)
is more efficient than this:
SELECT m.*
FROM mytable m
LEFT JOIN
othertable o
ON o.othercol = m.mycol
WHERE o.othercol IS NULL
See here:
NOT IN vs. NOT EXISTS vs. LEFT JOIN / IS NULL: SQL Server
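The linked article also covers the NOT EXISTS form of the same anti-join; a sketch of it, using the same tables as above:

SELECT *
FROM mytable m
WHERE NOT EXISTS
(
SELECT 1
FROM othertable o
WHERE o.othercol = m.mycol
)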
Is there any blog, article, or other source that recommends against using subselects?
I would steer clear of the blogs that blindly recommend avoiding subselects.
They are implemented for a reason and, believe it or not, the developers have put some effort into optimizing them.
How can I prove that a query will be faster if we avoid the subselects?
Write a query without the subselects which runs faster.
If you post your query here we possibly will be able to improve it. However, a version with the subselects may turn out to be faster.
Try rewriting some of the queries to eliminate the sub-selects and compare runtimes.
Share and enjoy.
I am not so proficient in T-SQL as of now (I have been writing it for the last 4-5 months), but I have written many queries. Although I have produced the required outputs, I sometimes feel that my queries are not well optimized. I searched on Google and found a lot of material about query optimization, and it says to look at the query plan (actual & estimated) for performance improvement.
As I already said, I am very new to writing queries, so it is becoming difficult for me to grasp those solutions. But I need to learn query optimization.
Can anybody help me out with how and where I should start?
Searching the internet reveals that a SEEK is better than a SCAN (be it index or table). How can I achieve a seek over a scan?
They also say that the ORDER BY clause, i.e. sorting, is more costly. What is the workaround? How can I write an effective query?
Can anybody explain to me, with some examples, which kind of query is better over what, and in what situation?
Edited
Dear All,
You have all answered, and that will help me a lot. But what I meant to say is that you have all practised a lot to become experts. Once upon a time, I guess you were all where I am now. So my humble request is: how did you all start out writing optimised queries? I know that patience is needed, and I will devote it.
I apologise for any wrong statement of mine.
Thanks in advance
Articles discussing Query Optimization issues are often very factual and useful, but as you found out they can be hard to follow. It is a bit like when someone is trying to learn the basics rules of baseball, and all the sports commentary he/she finds on the subject is rife with acronyms and strategic details about the benefits of sacrificing someone at bat, and other "inside baseball" trivia...
So you need to learn the basics first:
the structure(s) of the database storage
the structure of indexes: the clustered and non-clustered kinds, and multi-column indexes
the concept of covering a query
the selectivity of a particular column
the disadvantage of indexes when it comes to CRUD operations
the basic subtasks/strategies of a query: table or index scan, index seek, sorting, inner-outer merge etc.
the log file, the data recovery model.
The following links apply to MS SQL Server. If that is not the DBMS you are using, you can try to find similar material for the system of your choice. In fact, so long as you realize that the implementation may vary, it may be useful to peruse the MS documentation.
MS SQL storage structures
MS SQL pages and extents
Then, as you have started doing, learn how to read query plans (even if you do not fully understand them at first), and all this should bring you to a level where you start to make sense of the more advanced books or articles on the topic. I do not know of tutorials for query plans on the Internet (though I'm quite sure they exist...), but the following methodology may be of use: start with simple queries, review the query plan (if possible in graphical form), and start recognizing the most common elements: Table Scan, Index Seek, Sort, Nested Loops... Read the detailed properties of these operators: estimated number of rows, cost percentage, etc. When you find a new element that you do not know or understand, use that keyword to find details on the internet. Also: experiment a lot.
Finally, you should remember that while the way the query is written and the set of indexes provided cover a great part of the optimization needs, there are other sources of optimization, for example the way the hardware is put to use (a basic example is how, by having the data file and the log file on separate physical disks, we can greatly improve CRUD performance).
Searching the internet reveals that a SEEK is better than a SCAN (be it index or table). How can I achieve a seek over a scan?
Add the necessary index -- if the incremental costs on INSERT and UPDATE (and extra storage) are an overall win to speed up the seeking in your queries.
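A minimal T-SQL sketch (the table, column, and index names are hypothetical): with an index on the filtered column, the plan should show an Index Seek instead of a scan.

CREATE INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate);           -- covering index for the query below

SELECT CustomerID, OrderDate
FROM dbo.Orders
WHERE CustomerID = 42;         -- expect an Index Seek on IX_Orders_CustomerID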
They also say that the ORDER BY clause, i.e. sorting, is more costly. What is the workaround? How can I write an effective query?
Add the necessary index -- if the incremental costs on INSERT and UPDATE (and extra storage) are an overall win to speed up the ordering in your queries.
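Likewise, a sketch for the ORDER BY case (hypothetical names): if the index key order matches the ORDER BY, the optimizer can read the rows already sorted and skip the Sort operator.

CREATE INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID);

SELECT OrderDate, CustomerID
FROM dbo.Orders
ORDER BY OrderDate;            -- no Sort operator expected when this index is used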
Can anybody explain to me, with some examples, which kind of query is better over what, and in what situation?
You already pointed out a couple of specific questions -- and the answers were nearly identical. What good would it do to add another six?
Run benchmark queries over representative artificial data sets (they must resemble what you're planning to have in production -- if you have small toy-sized tables, the query plans will be neither representative nor meaningful), try with and without the indexes that appear to be suggested by the different query plans, and measure performance; rinse, repeat.
It takes 10,000 hours of practice to be good at anything. Optimizing DB schemas, indices, queries, etc, is no exception;-).
ORDER BY is a necessary evil - there's no way around it.
Refer to this question for solving index seek, scan and bookmark/key lookups. And this site is very good for optimization techniques...
Always ensure that you have indexes on your tables. Not too many and not too few.
If you are using SQL Server 2005, apply included columns in these indexes; they help with lookups.
ORDER BY is costly; if sorting is not required, why sort the data at all?
Always filter as early as possible; if you reduce the number of joins, function calls, etc. as early as possible, you reduce the overall time taken.
avoid cursors if you can
use temp tables / table variables for filtering where possible
remote queries will cost you
queries with subselects in the WHERE clause can be hurtful
table functions can be costly if not filtered
as always, there is no hard rule, and things should be taken on a per query basis.
Always make the query as understandable/readable as possible, and optimize when needed.
EDIT to comment question:
Temp tables can be used when you need to add indexes on the temp table (you cannot add indexes on table variables, except the PK). I mostly use table variables when I can, and only have the required fields in them, like this:
DECLARE @Table TABLE (
    FundID INT PRIMARY KEY   -- table variables are declared with @ and need typed columns
)
I would use this to hold my fund group IDs instead of having a join to tables that are less optimized.
I read a couple of articles the other day and, to my surprise, found that table variables are actually created in tempdb.
Also, I have heard, and found, that table UDFs can seem like a "black box" to the query planner. Once again, we tend to move the selects from the table functions into table variables, and then join on those table variables. But as mentioned earlier, write the code first, then optimize when you find bottlenecks.
I have found that CTEs can be useful, but also that when the level of recursion grows, they can become very slow...
How does one performance tune a SQL Query?
What tricks/tools/concepts can be used to change the performance of a SQL Query?
How can the benefits be Quantified?
What does one need to be careful of?
What tricks/tools/concepts can be used to change the performance of a SQL Query?
Using Indexes? How do they work in practice?
Normalised vs Denormalised Data? What are the performance vs design/maintenance trade offs?
Pre-processed intermediate tables? Created with triggers or batch jobs?
Restructure the query to use Temp Tables, Sub Queries, etc?
Separate complex queries into multiples and UNION the results?
Anything else?
How can performance be Quantified?
Reads?
CPU Time?
"% Query Cost" when different versions run together?
Anything else?
What does one need to be careful of?
Time to generate Execution Plans? (Stored Procs vs Inline Queries)
Stored Procs being forced to recompile
Testing on small data sets (Do the queries scale linearly, or square law, etc?)
Results of previous runs being cached
Optimising "normal case", but harming "worst case"
What is "Parameter Sniffing"?
Anything else?
Note to moderators:
This is a huge question; should I have split it up into multiple questions?
Note To Responders:
Because this is a huge question, please reference other questions/answers/articles rather than writing lengthy explanations.
I really like the book "Professional SQL Server 2005 Performance Tuning" to answer this. It's Wiley/Wrox, and no, I'm not an author, heh. But it explains a lot of the things you ask for here, plus hardware issues.
But yes, this question is way, way beyond the scope of something that can be answered in a comment box like this one.
Writing sargable queries is one of the things needed; if you don't write sargable queries, the optimizer can't take advantage of the indexes. Here is one example: Only In A Database Can You Get 1000% + Improvement By Changing A Few Lines Of Code - that query went from over 24 hours to 36 seconds.
Of course you also need to know the difference between these three join types:
loop join
hash join
merge join
see here: http://msdn.microsoft.com/en-us/library/ms173815.aspx
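If you want to see the physical join types side by side, one hedged trick is to force each with a join hint and compare the resulting plans (the tables are hypothetical; in production code the choice is normally best left to the optimizer):

SELECT o.OrderID, c.Name
FROM dbo.Orders o
INNER HASH JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;
-- swap HASH for LOOP or MERGE and compare the estimated plans and costs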
Here are some basic steps to follow:
Define business requirements first
SELECT fields instead of using SELECT *
Avoid SELECT DISTINCT
Create joins with INNER JOIN (not WHERE) - see the sketch after this list
Use WHERE instead of HAVING to define filters
Proper indexing
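A minimal sketch pulling a few of these together (hypothetical tables): an explicit column list, an explicit INNER JOIN rather than a comma join in the WHERE clause, and row filtering done in WHERE before grouping rather than in HAVING.

SELECT c.CustomerID, SUM(o.Total) AS TotalSpend
FROM dbo.Customers c
INNER JOIN dbo.Orders o ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20090101'      -- filter rows early, in WHERE
GROUP BY c.CustomerID
HAVING SUM(o.Total) > 1000;          -- HAVING only for conditions on aggregates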
Here are some basic steps we can follow to improve performance:
Check for indexes on the primary and foreign keys of the tables involved; if the query still takes too long, index the columns used in the query.
Every index has to be maintained on each write operation, so do not index each and every column.
Before a batch insertion, drop the indexes and then recreate them afterwards.
Select sparingly.
Use IF EXISTS instead of COUNT (see the sketch after this list).
Before accusing the DBA, first check the network connections.
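A sketch of the IF EXISTS point (hypothetical table): EXISTS can stop at the first matching row, whereas COUNT(*) has to touch every qualifying row.

IF EXISTS (SELECT 1 FROM dbo.Orders WHERE CustomerID = 42)
    PRINT 'Customer has orders';

-- rather than:
IF (SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = 42) > 0
    PRINT 'Customer has orders';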
What techniques can be applied effectively to improve the performance of SQL queries? Are there any general rules that apply?
Use primary keys
Avoid select *
Be as specific as you can when building your conditional statements
De-normalisation can often be more efficient
Table variables and temporary tables (where available) will often be better than using a large source table
Partitioned views
Employ indices and constraints
Learn what's really going on under the hood - you should be able to understand the following concepts in detail:
Indexes (not just what they are but actually how they work).
Clustered indexes vs heap allocated tables.
Text and binary lookups and when they can be in-lined.
Fill factor.
How records are ghosted for update/delete.
When page splits happen and why.
Statistics, and how they affect various query speeds.
The query planner, and how it works for your specific database (for instance on some systems "select *" is slow, on modern MS-Sql DBs the planner can handle it).
The biggest thing you can do is to look for table scans in sql server query analyzer (make sure you turn on "show execution plan"). Otherwise there are a myriad of articles at MSDN and elsewhere that will give good advice.
As an aside, when I started learning to optimize queries I ran sql server query profiler against a trace, looked at the generated SQL, and tried to figure out why that was an improvement. Query profiler is far from optimal, but it's a decent start.
There are a couple of things you can look at to optimize your query performance.
Ensure that you just have the minimum of data. Make sure you select only the columns you need. Reduce field sizes to a minimum.
Consider de-normalising your database to reduce joins
Avoid loops (i.e. fetch cursors), stick to set operations.
Implement the query as a stored procedure as this is pre-compiled and will execute faster.
Make sure that you have the correct indexes set up. If your database is used mostly for searching then consider more indexes.
Use the execution plan to see how the processing is done. What you want to avoid is a table scan as this is costly.
Make sure that the Auto Statistics is set to on. SQL needs this to help decide the optimal execution. See Mike Gunderloy's great post for more info. Basics of Statistics in SQL Server 2005
Make sure your indexes are not fragmented. Reducing SQL Server Index Fragmentation
Make sure your tables are not fragmented. How to Detect Table Fragmentation in SQL Server 2000 and 2005
Use a WITH statement to handle query filtering. Limit each subquery to the minimum number of rows possible, then join the subqueries.
WITH
master AS
(
SELECT SSN, FIRST_NAME, LAST_NAME
FROM MASTER_SSN
WHERE STATE = 'PA' AND
GENDER = 'M'
),
taxReturns AS
(
SELECT SSN, RETURN_ID, GROSS_PAY
FROM MASTER_RETURNS
WHERE YEAR < 2003 AND
YEAR > 2000
)
SELECT *
FROM master,
taxReturns
WHERE master.ssn = taxReturns.ssn
Subqueries within a WITH statement may end up being the same as inline views, or automatically generated temp tables. I find in the work I do (retail data) that about 70-80% of the time there is a performance benefit.
100% of the time, there is a maintenance benefit.
I think using SQL query analyzer would be a good start.
In Oracle you can look at the explain plan to compare variations on your query
Make sure that you have the right indexes on the table. If you frequently use a column as a way to order or limit your dataset, an index can make a big difference. I saw in a recent article that SELECT DISTINCT can really slow down a query, especially if you have no index.
The obvious optimization for SELECT queries is ensuring you have indexes on columns used for joins or in WHERE clauses.
Since adding indexes can slow down data writes, you do need to monitor performance to ensure you don't kill the DB's write performance, but that's where a good query analysis tool can help you balance things accordingly.
Indexes
Statistics
On the Microsoft stack, the Database Engine Tuning Advisor
Some other points (mine are based on SQL Server; since each DB backend has its own implementations, these may or may not hold true for all databases):
Avoid correlated subqueries in the select part of a statement, they are essentially cursors.
Design your tables to use the correct datatypes to avoid having to apply functions on them to get the data out. It is far harder to do date math when you store your data as varchar for instance.
If you find that you are frequently doing joins that have functions in them, then you need to think about redesigning your tables.
If your WHERE or JOIN conditions include OR statements (which are slower) you may get better speed using a UNION statement.
UNION ALL is faster than UNION if (and only if) the two statements are mutually exclusive and return the same results either way.
NOT EXISTS is usually faster than NOT IN or using a left join with a WHERE clause of ID IS NULL.
In an UPDATE query add a WHERE condition to make sure you are not updating values that are already equal. The difference between updating 10,000,000 records and 4 can be quite significant!
Consider pre-calculating some values if you will be querying them frequently or for large reports. A sum of the values in an order only needs to be computed when the order is created or adjusted, rather than when you are summarizing the results of 10,000,000 orders in a report. Pre-calculations should be done in triggers so that they are always up to date as the underlying data changes. And it doesn't have to be just numbers either; we have a calculated field that concatenates names that we use in reports.
Be wary of scalar UDFs, they can be slower than putting the code in line.
Temp tables tend to be faster for large data sets, and table variables faster for small ones. In addition, you can index temp tables.
Formatting is usually faster in the user interface than in SQL.
Do not return more data than you actually need.
This one seems obvious but you would not believe how often I end up fixing this. Do not join to tables that you are not using to filter the records or actually calling one of the fields in the select part of the statement. Unnecessary joins can be very expensive.
It is a very bad idea to create views that call other views that call other views. You may find you are joining to the same table 6 times when you only need to join once, and creating millions of records in an underlying view in order to get the 6 that are in your final result.
In designing a database, think about reporting not just the user interface to enter data. Data is useless if it is not used, so think about how it will be used after it is in the database and how that data will be maintained or audited. That will often change the design. (This is one reason why it is a poor idea to let an ORM design your tables, it is only thinking about one use case for the data.) The most complex queries affecting the most data are in reporting, so designing changes to help reporting can speed up queries (and simplify them) considerably.
Database-specific implementations of features can be faster than using standard SQL (That's one of the ways they sell their product), so get to know your database features and find out which are faster.
And because it can't be said too often, use indexes correctly, not too many or too few. And make your WHERE clauses sargable (Able to use indexes).
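To close with a sketch of sargability (hypothetical table): the first predicate wraps the column in a function, which prevents an index seek on OrderDate; the second expresses the same filter in a sargable way.

-- Non-sargable: the function on the column forces a scan
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2009;

-- Sargable: an index on OrderDate can be used for a seek
SELECT OrderID
FROM dbo.Orders
WHERE OrderDate >= '20090101' AND OrderDate < '20100101';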