SQL Execution Plan High Cost 70% - sql

I'm running a query that takes 2 seconds but should perform better, so I opened the Execution Plan details in SQL Server Management Studio and found a step in the plan whose cost is 70%.
Then I right-clicked on that item and found an option that says "Missing Index Details"; after I clicked it, a query with a recommendation was generated:
/*
Missing Index Details from SQLQuery15.sql - (local).application_prod (appprod (58))
The Query Processor estimates that implementing the following index could improve the query cost by 68.8518%.
*/
/*
USE [application_prod]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[cloud_document] ([isactivedocument])
INCLUDE ([objectuid])
GO
*/
So my question is: what exactly happens if I execute that query? Is it going to affect my database? Are there any drawbacks or side effects after applying it?
Thanks a lot, I appreciate any help in advance.

Running the query will create an index on the specified table (cloud_document).
This should improve read performance and reduce query time.
It also affects the performance of INSERT/UPDATE/DELETE statements, as the index needs to be maintained during those statements.
The decision to use indexing, how many indexes, and what each index consists of is more an art than an exact science.
The actual maintenance of indexes, defragmentation, and statistics can be automated, but should be left alone until you have a better understanding of what indexes are and what they do.
I would recommend that you start reading some documentation on indexing.
You might start with Stairway to SQL Server Indexes.
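If you do decide to run the suggested statement, the only thing missing is a real index name. A minimal sketch, with the placeholder replaced by an illustrative name of my own choosing (IX_cloud_document_isactivedocument):
USE [application_prod]
GO
-- Same definition SSMS generated, with the placeholder name filled in.
-- The index is keyed on the filter column and carries objectuid via INCLUDE,
-- so a query filtering on isactivedocument and returning objectuid can be
-- answered from the index alone.
CREATE NONCLUSTERED INDEX IX_cloud_document_isactivedocument
ON [dbo].[cloud_document] ([isactivedocument])
INCLUDE ([objectuid]);
GO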

The literal meaning is that you should build an index on isactivedocument of the table [dbo].[cloud_document], from which I assume you are using isactivedocument as a condition to filter the table and selecting the column objectuid. Something like
select objectuid, ... from [dbo].[cloud_document] where isactivedocument = 0/1
Note that the "Clustered Index Scan (70%)" doesn't mean there is a problem with the clustered index. It means you don't have an index on isactivedocument, so the SQL engine has to scan the clustered index to find what you want. That's why it is so expensive. Once you create an index on isactivedocument, check the plan again and you'll see the node become an "Index Seek", which is a much faster way to find what you want.
Usually, if your database load mainly comes from queries, a new index doesn't do much harm to your system. Just go ahead and create the index. But of course you should keep the number of indexes as small as possible.
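One way to see the difference for yourself is to compare logical reads before and after creating the index; a minimal sketch (the table and column names come from the question, the filter value is just illustrative):
SET STATISTICS IO ON;

-- Before the index: expect a Clustered Index Scan and a high logical-reads count.
-- After the index: expect an Index Seek and far fewer reads.
SELECT objectuid
FROM [dbo].[cloud_document]
WHERE isactivedocument = 1;

SET STATISTICS IO OFF;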


Why "IN " query tag is so costly in sql stored procedures?

How can I fix my performance issue? I have an SQL query with 'IN', and I guess the 'IN' is causing a costly performance problem. Do I need to index my SQL query?
My sql query:
SELECT [p].[ReferencedxxxId]
FROM [Common].[xxxReference] AS [p]
WHERE ([p].[IsDeleted] = 0)
AND (([p].[ReferencedxyzType] = @__refxyzType_0)
AND [p].[ReferencedxxxId] IN ('42342','ffsdfd','5345345345'))
My solution (but I need your help for better advice): which one is correct, a clustered or a nonclustered index?
USE [xxx]
GO
CREATE NONCLUSTERED INDEX IX_NonClusteredIndexDemo_xxxId
ON [Common].[xxxReference](xxxId)
INCLUDE ([ID],[ReferencedxxxId])
WITH (DROP_EXISTING=ON, ONLINE=ON, FILLFACTOR=90)
GO
Second:
CREATE INDEX xxxReference_ReferencedxxxId_index
ON [Common].[xxxReference] (ReferencedxxxId)
Which one is correct, or do you have a better solution?
The performance problem of this query is not the result of using the IN operator.
This operator performs very well with small lists (say, less than 1000 members).
The performance bottleneck here is the fact that SQL Server performs an index scan instead of an index seek (which is very costly), plus the key lookup, which is 20% of the query cost.
To avoid both problems, you can add an index on IsDeleted, ReferencedxyzType and ReferencedxxxId - probably in this exact order.
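A minimal sketch of such an index, following the column order suggested above (the index name is just an illustrative choice):
USE [xxx]
GO
-- Keyed on the three filter columns; ReferencedxxxId is already a key column,
-- so the SELECT above is fully covered and no INCLUDE list is needed.
CREATE NONCLUSTERED INDEX IX_xxxReference_IsDeleted_Type_ReferencedId
ON [Common].[xxxReference] ([IsDeleted], [ReferencedxyzType], [ReferencedxxxId]);
GO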
SQL performance tuning is a science that tends to look a little like art or magic - either way you look at it, it requires a good knowledge of both the theory and practice of indexing and the relevant system's requirements.
Therefore, my suggestion is this: do not attempt to solve it yourself with the help of strangers on the internet. Get an expert for a consulting job for a couple of hours/days to analyze the system and help you fine-tune it.
Learn whatever you can during this process. Ask questions about everything that is not trivial. This will be money well spent.
Couple of things:
If you have a SELECT statement inside the IN, that should be avoided and replaced with an EXISTS clause. In your example above that is not relevant, as you have literal values inside the IN. Using EXISTS and NOT EXISTS instead of IN and NOT IN helps SQL Server avoid scanning each value of the column for every value inside the IN / NOT IN; it can instead short-circuit the search once a match or non-match is found.
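As an illustration of that rewrite (the subquery and the second table are hypothetical; they are not part of the original question):
-- IN with a subquery:
SELECT p.ReferencedxxxId
FROM Common.xxxReference AS p
WHERE p.ReferencedxxxId IN (SELECT r.ReferencedxxxId
                            FROM Common.SomeOtherTable AS r);  -- hypothetical table

-- Equivalent EXISTS form, which can stop probing as soon as a match is found:
SELECT p.ReferencedxxxId
FROM Common.xxxReference AS p
WHERE EXISTS (SELECT 1
              FROM Common.SomeOtherTable AS r                  -- hypothetical table
              WHERE r.ReferencedxxxId = p.ReferencedxxxId);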
Avoid implicit conversions. They degrade performance for many reasons, including: (i) SQL Server cannot find proper statistics on an index and hence cannot leverage it, and would rather make use of a clustered index available on the table (which may not cover your query); (ii) the storage engine does not assign the proper amount of RAM during the memory-allocation phase of the query; (iii) cardinality estimation goes wrong, because SQL Server has no statistics on the computed value of that column, only on the column itself.
If you look at your execution plan posted above, you will see a yellow mark on the SELECT operator. If you hover over it, you will see one or more warning messages. If your warning is related to implicit conversion, try to use proper data types during the comparison.
E.g. what is the data type of the column [ReferencedxxxId]? If it is not NVARCHAR but rather VARCHAR, then I would suggest:
Make the values inside the IN VARCHAR (currently you are making them NVARCHAR). This way you will still be able to take full advantage of the rowstore index created on the [ReferencedxxxId] column.
If you must have NVARCHAR values inside the IN clause, then you should:
CONVERT/CAST the column [ReferencedxxxId] in your IN clause. This gets rid of the implicit conversion, but you will no longer be able to take full advantage of the rowstore index on the [ReferencedxxxId] column.
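A sketch of those two alternatives, assuming [ReferencedxxxId] is VARCHAR (the parameter, its type, and the NVARCHAR length are illustrative assumptions):
DECLARE @refxyzType int = 0;  -- assumed type and value, for illustration only

-- Option 1: VARCHAR literals match the column type, so the rowstore index can be sought.
SELECT p.ReferencedxxxId
FROM Common.xxxReference AS p
WHERE p.IsDeleted = 0
  AND p.ReferencedxyzType = @refxyzType
  AND p.ReferencedxxxId IN ('42342', 'ffsdfd', '5345345345');

-- Option 2: if the values must stay NVARCHAR, convert the column explicitly.
-- The implicit-conversion warning goes away, but the index can no longer be
-- used for a seek on this column.
SELECT p.ReferencedxxxId
FROM Common.xxxReference AS p
WHERE p.IsDeleted = 0
  AND p.ReferencedxyzType = @refxyzType
  AND CONVERT(nvarchar(50), p.ReferencedxxxId) IN (N'42342', N'ffsdfd', N'5345345345');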
In addition, you could create a clustered/nonclustered columnstore index on the table covering the columns used in the query. This should significantly enhance the performance of your SELECT query.
If you decide to go the route of using the rowstore index by correcting the values inside the IN, you need to make sure that you create a clustered/nonclustered index which covers the query. That means the index covers the columns on which you are searching ([ReferencedxxxId], [ReferencedxyzType], [IsDeleted]) and then includes the columns used in the SELECT statement under the INCLUDE clause (if it is a nonclustered index).
Also, when you are creating a composite rowstore index, try to order the columns inside the index from high cardinality to low cardinality, left to right, to make the best use of that index.
On the assumption of an OLTP-based system rather than OLAP, my first pass would be an NC index. Given that IsDeleted is likely to have the least selectivity, I would place it last: an NC index on ReferencedxyzType, ReferencedxxxId, IsDeleted.
In a higher-volume scenario I might even be tempted to move IsDeleted out of the index key and onto an INCLUDE instead, since it provides so little selectivity to the index itself.
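A sketch of that second variant, with IsDeleted demoted to an included column (the index name is just illustrative):
CREATE NONCLUSTERED INDEX IX_xxxReference_Type_ReferencedId
ON [Common].[xxxReference] ([ReferencedxyzType], [ReferencedxxxId])
INCLUDE ([IsDeleted]);
GO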
There is clearly already a clustered index in place on the table (we can see it from the query plan), but we don't have the details of what is in it.
The question around clustered vs non-clustered is more complex and requires a lot more knowledge of the system and usage.

Does PostgreSQL need a VACUUM or ANALYZE schedule to make use of indexes?

When creating indexes on PostgreSQL tables, EXPLAIN ANALYZE followed by an SQL command shows which indexes are used.
For example:
EXPLAIN ANALYZE SELECT A,B,C FROM MY_TABLE WHERE C=123;
Returns:
Seq Scan on public.my_table (cost=...) <- No index, BAD
And, after creating the index, it would return:
Index Scan using my_index_name on public.my_table (cost=...) <- Index, GOOD
However, for some queries against a table with a few hundred records that should have used the same index, it didn't make any difference. Reading through the documentation, it is recommended to either run ANALYZE or have the "autovacuum" daemon on. That way the database knows the size of the tables and can decide on query plans properly.
Is this absolutely necessary in a production environment? In other words, will PostgreSQL use the index when it's time to use it, without needing to run ANALYZE or VACUUM as an extra task?
Short answer "just run autovacuum." Long answer... yes, because statistics can get out of date.
Let's talk about indexes and how/when PostgreSQL decides to use them.
PostgreSQL gets a query in, parses it, and then begins the planning process. How are we going to scan the tables? How are we going to join them and in what order? These are not trivial decisions and trying to find the generally best ways to do things typically means that PostgreSQL needs to know something about the tables.
The first thing to note is that indexes are not always a win. No plan ever beats a sequential scan through a one-page table, and even a 5 page table will almost always be faster with a sequential scan than an index scan. So PostgreSQL cannot safely decide to "use all available indexes."
So the way PostgreSQL decides whether to use an index is to check statistics. Now, these go out of date, which is why you want autovacuum to be updating them. You say your table has a few hundred records, and the statistics were probably out of date. If PostgreSQL cannot say that the index is a win, it won't use it. A few hundred records is only approaching "an index might help" territory, depending on how selective the index is at weeding out records.
In your large table, there was probably no question based on existing statistics that the index would help. In your smaller table, there probably was a question and it got answered one way based on the stats it had, and a different way based on newer stats.
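If you suspect the statistics on a particular table are stale, you can refresh them by hand and check when autovacuum last analyzed the table; a minimal sketch (my_table is the placeholder name from the question):
-- Refresh the planner statistics for one table immediately.
ANALYZE my_table;

-- See when autovacuum/autoanalyze last touched the table and the row counts
-- the planner is working from.
SELECT relname, last_analyze, last_autoanalyze, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'my_table';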

Missing Index Details SQL

I am tuning my SQL Server and when I show the execution plan for one of my queries, at the top it reads:
"Missing Index (Impact 99.7782): CREATE NONCLUSTERED INDEX..."
So I looked at the missing index details and it is showing this:
/*
Missing Index Details from ExecutionPlan1.sqlplan
The Query Processor estimates that implementing the following index could improve the query cost by 99.7782%.
*/
/*
USE [phsprod]
GO
CREATE NONCLUSTERED INDEX [<Name of Missing Index, sysname,>]
ON [dbo].[address] ([userid])
GO
*/
I have only been working with SQL for about a month now and I have never done anything with this as all my tables have been built for me already. Can anyone help explain/give me any ideas on what to do with this? Thanks.
That means SQL Server is suggesting that your query could run faster with an index. Indexes add overhead and disk storage, so you should ignore this hint unless the query is giving performance problems in production.
To create the index, uncomment the statement after the USE, replace [<Name of Missing Index, sysname,>] with a real name, and run it:
USE [phsprod]
GO
CREATE NONCLUSTERED INDEX IX_Address_UserId
ON [dbo].[address] ([userid])
That means SQL Server is suggesting that your query could run faster with this index.
It can mean that your current indexes are not the greatest for the query you are running. Maybe your query could be optimised. Or maybe you COULD add the index. But if you decide to do this, you have to analyse it carefully.
Indeed, indexes add overhead and disk storage. But they can also improve performance.
For instance, if you always search your table based on a "userid", then maybe it can pay off to add an index on that column, since SQL Server will be able to search using this index.
Think of this a little bit like searching for a word in a dictionary.
If you're looking for the word "dog", you're going to look under "d", then at words that begin with "do", to finally find the word "dog".
If the words were not in alphabetical order in the dictionary, you would have to search the whole dictionary to find the word "dog"!
A clustered index (often the primary key) defines the order in which the rows themselves are stored.
Right now, it seems that you don't have an index on the column "userid". So SQL Server (probably) has to scan the entire table until it finds the userid.
If you add a nonclustered index, it will not re-order your table, but it will tell SQL Server in what range it should search to find the userid you want (like "in the dictionary, between pages 20 and 30"), so it will not have to search the whole table to find it.
But it also means that when you add new data to the table, or remove or modify it, SQL Server needs to keep the index up to date. Generally, a few indexes don't hurt, but you need to be sure they are needed. You don't want to add too many indexes, as they can hurt performance.
And if your table contains only a few hundred rows, maybe it won't show a big performance improvement. But over time, as your table grows, it may make a difference.
Hope that helps!

Interpret the results of the index usage query

I have added an index to a table in SQL Server 2008. I would like to know how much impact the index has on the table and whether the index is useful or not.
Thanks.
The best way to tell is to look at execution plans for queries run against the table.
You can look at index usage DMVs but they only tell you how many queries used the index. Whether that is a one-row seek or a 10 million row scan, there's no difference in the recorded stats.
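A minimal sketch of such a DMV query (the table name is a placeholder, and the counters reset when the instance restarts):
SELECT i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
  AND s.object_id = OBJECT_ID('dbo.MyTable');  -- placeholder table name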
Did you take any measurements prior to making the change? If you did a baseline measurement prior to modifying the system, run the same baseline again and compare the results.
Two things the DMVs for indexes can tell us are how much the index is used to satisfy queries and how much it costs to keep the index maintained. As the ratio (index usage / index updates) gets smaller, the more time the DBA should take in deciding whether the index is needed.

SQL indexes vs full table scan

While writing complex SQL queries, how do we ensure that we are using proper indexes and avoiding full table scans? I do it by making sure I only join on columns that have indexes (primary key, unique key, etc.). Is this enough?
Ask the database for the execution plan for your query, and proceed from there.
Don't forget to index the columns that appear in your where clause as well.
Look at the execution plan of the query to see how the query optimizer thinks things must be retrieved. The plan is generally based on the statistics of the tables, the selectivity of the indices, and the order of the joins. Note that the optimizer can decide that performing a full table scan is 'cheaper' than an index lookup.
Other things to look for:
avoid subqueries if possible
minimize the use of 'OR' predicates in the WHERE clause
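To see which of those choices the optimizer actually made, you can ask the database for the plan text before running the query; a minimal sketch for SQL Server (the tables and columns in the query are just placeholders):
-- Show the estimated plan instead of executing the statement.
SET SHOWPLAN_ALL ON;
GO
SELECT o.OrderId, c.CustomerName          -- placeholder query
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerId = o.CustomerId
WHERE o.OrderDate >= '2024-01-01';
GO
SET SHOWPLAN_ALL OFF;
GO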
It is hard to say what the best indexing is, because different strategies suit different situations. Still, there are a couple of things you should know about indexes.
An index SOMETIMES increases performance of SELECT statements and ALWAYS decreases performance of INSERT and UPDATE.
To index a table it is not necessary to make the indexed field a key. Also, real-life indexes almost always include several fields.
Don't create any indexes for "future purposes" if your performance is satisfactory, even if you don't have indexes at all.
Always try to analyze the execution plan when tuning indexes. Don't be afraid to experiment.
Plus, a table scan is not always a bad thing.
That is all from me.
Use the Database Tuning Advisor (SQL Server) to analyse your query. It will suggest the necessary indexes to add to tune your query's performance.