I have a large table in Microsoft SQL Server 2008. It has two indexes: one with column A in descending order, and another with column A in ascending order together with some other columns.
My application does the following:
Select the record.
If there is no record, insert one.
If it is found, update the record.
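In T-SQL, that pattern might look like the minimal sketch below (table, column, and variable names are hypothetical):

DECLARE @a INT = 42, @data NVARCHAR(64) = N'example';
IF EXISTS (SELECT 1 FROM dbo.MyTable WHERE A = @a)
    UPDATE dbo.MyTable SET Data = @data WHERE A = @a;
ELSE
    INSERT INTO dbo.MyTable (A, Data) VALUES (@a, @data);

(On SQL Server 2008 the same select/insert/update logic can also be expressed as a single MERGE statement.)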
Note that this table has millions of records.
The question is: do these indexes affect select/insert/update performance in any way?
Any suggestions?
Having two indexes that are exactly the same except for the sort order will make no difference to the SQL engine; it will (practically) just pick either one.
Imagine you have two dictionaries of the English language, one sorting words from A to Z and the other from Z to A. The effort you need to search for a particular word will be roughly the same in both cases.
A different case would be if you had two lists of people's data, one ordered by first name then last name and the other by last name then first name. If you have to look for "John Doe", the one that's ordered by last name first will be practically useless compared to the other one.
These examples are very simplified representations of indexes in SQL Server. Indexes store their data in a structure called a B-tree, but for searching purposes these examples are enough to understand when an index will be useful and when it won't.
To sum up: you can drop the first index and keep the one that has the additional columns, since it can be used in more scenarios and also covers all the cases the other one would. Keeping a useless index brings additional maintenance work, such as updating the index on every insert, update and delete and refreshing its statistics, so you might as well drop it.
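For example, assuming the redundant descending index is named IX_MyTable_A_Desc (a hypothetical name):

DROP INDEX IX_MyTable_A_Desc ON dbo.MyTable;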
P.S.: As for whether the second index gets used, that depends greatly on the query you use to access that particular table, its joins and its WHERE clauses. You can highlight the query in SSMS and choose "Display Estimated Execution Plan" to see how the engine accesses each object to perform the operation. If the index is used, you will see it there.
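Beyond the estimated plan, SQL Server also tracks how often each index has actually been touched since the last restart. A sketch of that check, with a hypothetical table name:

SELECT i.name AS index_name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS s
    ON s.object_id = i.object_id AND s.index_id = i.index_id AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('dbo.MyTable');

An index whose user_seeks and user_scans stay at zero while user_updates keeps climbing is pure maintenance cost.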
Thanks for all the answers. I explored SQL Server Profiler and the Database Engine Tuning Advisor and ran them to get recommended indexes. The advisor recommended a single index with included columns. I created that index and performance has improved.
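Such a recommendation generally takes a shape like this (hypothetical names; the actual columns came from the advisor):

CREATE NONCLUSTERED INDEX IX_MyTable_A ON dbo.MyTable (A ASC) INCLUDE (B, C);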
Related
What is the difference between using Indexes in SQL vs Using the ORDER BY clause?
From what I understand, indexes arrange the specified column(s) in an ordered manner, which helps the query engine look through the tables quickly (and hence avoids a table scan).
My question: why can't the query engine simply use the ORDER BY clause to get the same performance improvement?
Thanks!
You tagged this sql-server-2008, but the question has nothing to do with SQL Server specifically; it applies to all databases.
From Wikipedia:
Indexing is a technique some storage engines use for improving database performance. The many types of indexes share the common property that they reduce the need to examine every entry when running a query. In large databases, this can reduce query time/cost by orders of magnitude. The simplest form of index is a sorted list of values that can be searched using a binary search with an adjacent reference to the location of the entry, analogous to the index in the back of a book. The same data can have multiple indexes (an employee database could be indexed by last name and hire date).
From a related thread on StackExchange:
In the SQL world, order is not an inherent property of a set of data. Thus, you get no guarantees from your RDBMS that your data will come back in a certain order -- or even in a consistent order -- unless you query your data with an ORDER BY clause.
To answer why indexes are necessary: note the part of the quote about reducing the need to examine every entry. In the absence of an index, when an ORDER BY is issued, every entry has to be examined, and that cost grows with the number of entries.
ORDER BY is applied only when reading. A single column may appear in several indexes, and different queries may request several different orderings; it is not possible to define the right indexes unless we understand how the query requests are made.
A lot of the time, indexes are added once new query patterns emerge, to keep those queries performant; in that sense, index creation is driven by how you define your ORDER BY clauses in SQL.
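As a simple illustration (hypothetical table and column), an index lets the engine hand back rows already in the requested order instead of sorting them on every read:

CREATE INDEX IX_Employee_LastName ON Employee (LastName);
SELECT LastName FROM Employee ORDER BY LastName; -- can be read straight off the index, no sort step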
The query engine, which processes your SQL with or without an ORDER BY, defines your execution plan and does not understand how the data is stored. The data retrieved by a query may come partly from memory, if it was in cache, and partly or fully from disk. When reading from disk, the storage engine uses the indexes to figure out how to read the data quickly.
ORDER BY affects the performance of a query only when reading. An index affects the performance of a query in all of the Create, Read, Update and Delete operations.
A query engine may choose to use an index or totally ignore the index based on the data characteristics.
I have a table Person with two columns, Name and Gender, and suppose my application has a query that is called frequently:
select * from Person where Gender = 'M'
So is it advisable to create an index on the column Gender?
It's not advisable unless there are loads of one value and only a few of the other, and your query only looks for the few. Otherwise a full table scan will give you a much more efficient result than diving through an index. In fact, even if you created the index, it's highly unlikely the optimiser would use it.
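A sketch of the skewed case just described (assuming, say, that only a tiny fraction of rows have Gender = 'F'):

CREATE INDEX IX_Person_Gender ON Person (Gender);
-- Useful only when hunting for the rare value; for Gender = 'M'
-- (the bulk of the table) a full scan is cheaper and the optimiser
-- will most likely ignore the index.
SELECT * FROM Person WHERE Gender = 'F';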
The points below might give you an idea.
From the documentation:
In general, index access paths are more efficient for statements that retrieve a small subset of table rows, whereas full table scans are more efficient when accessing a large portion of a table.
Do not index columns that are modified frequently. UPDATE statements that modify indexed columns and INSERT and DELETE statements that modify indexed tables take longer than if there were no index. Such SQL statements must modify data in indexes as well as data in tables. They also generate additional undo and redo.
When choosing to index a key, consider whether the performance gain for queries is worth the performance loss for INSERTs, UPDATEs, and DELETEs and the use of the space required to store the index. You might want to experiment by comparing the processing times of the SQL statements with and without indexes. You can measure processing time with the SQL trace facility.
I'd appreciate some advice from SQL Server gurus here. Let me explain...
I have a SQL Server 2008 database table that has 21 columns. Here's a quick rundown of their types:
INT Primary Key
Several other INTs that are already indexed (used to reference this and other tables)
Several NVARCHAR(64) to hold user-provided text
Several NVARCHAR(256) to hold longer user-provided text
Several DATETIME2
One BIGINT
Several UNIQUEIDENTIFIERs, one of which is already indexed
The table is presented to the user as a sortable table, and the user can choose which column to sort it by. This table may contain many thousands of records (currently about 21,000, and it will keep growing).
So my question is: do I need to create an index on each column to enable faster sorting?
PS. Forgot to say: the output obviously supports pagination, so the user sees no more than 100 rows at once.
Contrary to popular belief, just having an index on a column does not guarantee that any queries will be any faster!
If you constantly use SELECT * from that table, these single-column non-clustered indices will most likely not be used at all.
A good nonclustered index is a covering index, which means it contains all the columns necessary to satisfy one or more given queries. If you have that situation, a nonclustered index can make sense; otherwise, more often than not, it will be ignored by the query optimizer. The reason: if you need all the columns anyway, the query would have to do a key lookup from the nonclustered index into the actual data (the clustered index) for each row found, and the key lookup is a very expensive operation. Doing it for lots of hits becomes overly costly, so the query optimizer quickly switches to a scan (possibly a clustered index scan) to fetch the data instead.
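For example, a sketch with assumed table and column names: if a frequent query needs only a couple of columns, a covering index with INCLUDE can satisfy it without any key lookups:

CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);
-- fully served from the index, no key lookups:
SELECT OrderDate, TotalDue FROM dbo.Orders WHERE CustomerID = 42;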
Don't over-index - use a well-designed clustered index, put indices on the foreign key columns to speed up joins - and then let it be for the time being. Observe your system, measure performance, maybe add an index here or there - but don't just overload the system with tons of indices!
Having too many indices can be worse than having none: every index must be maintained, i.e. updated for each INSERT, UPDATE and DELETE statement, and that takes time!
this table is ... presented to a user as a sortable table ... [that] may contain many thousands of records
If you're ordering many thousands of records for display, you're doing it wrong. Typical users can reasonably process at most around 500 typical records. Exceptional users can handle a couple thousand. Any more than that, and you're misleading your users into a false sense that they've seen a representative sample. This results in poor decision making and inefficient user workflow. Instead, you need to focus on a good search algorithm.
Another thing to keep in mind here is that more indexes mean slower inserts and updates. It's a balancing act. SQL Server keeps statistics on which queries and sorts it actually performs, and makes those statistics available to you. There are queries you can run that tell you exactly which indexes SQL Server thinks it could use. I would deploy without any sorting index, let it run for a week or two, then look at the data, see what users actually sort on, and index just those columns.
Take a look at this link for an example and introduction on finding missing indexes:
http://sqlserverpedia.com/wiki/Find_Missing_Indexes
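One common shape of such a query, against the missing-index DMVs that exist in SQL Server 2005 and later (treat the output as suggestions to evaluate, not as gospel):

SELECT d.statement AS table_name, d.equality_columns, d.inequality_columns,
       d.included_columns, gs.user_seeks, gs.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS gs ON gs.group_handle = g.index_group_handle
ORDER BY gs.user_seeks * gs.avg_user_impact DESC;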
Generally, indexes are used to accelerate WHERE conditions (and in some cases JOINs), so I don't think creating an index on any column other than the PRIMARY KEY will accelerate sorting. You can do your sorting on the client (if you use WinForms or WPF), or in the database for web scenarios.
Good Luck
I know there are similar questions on StackOverflow, but after testing different indexes on my tables, I think I don't quite understand how indexes work and I'd like it if someone could explain the behavior I'm experiencing on my queries' performance.
I'm using this query as an example, I'm going to try to explain it in detail:
SELECT ss1.PlayerID, ss1.Name, ss1.Series, ss1.LanesNum, ss1.Date, ss1.LeagueName, ss1.Season FROM SeriesScores ss1
JOIN (SELECT Series, Gender, LanesNum, Bowlout, Season FROM SeriesScores
WHERE Gender = ? AND LanesNum = ? AND Series > -1 AND Bowlout = 'No' AND Season = '2011-2012'
ORDER BY Series DESC LIMIT 0,?) as ss2
USING(series, gender, lanesNum, bowlout, season)
ORDER BY ss1.Series DESC
This query is used to get the highest series bowled in a given season for each pair of lanes in a bowling center for both male and female players.
I'm joining the table on itself instead of using the MAX aggregate function because if there's a tie on a given pair of lanes, I want all the names to come up.
Basically, I join all the fields that match what the inner SELECT returns. That inner SELECT returns the top X players for a given gender and a given pair of lanes.
The USING part makes sure that only players who haven't bowled out, and whose gender, series, lanesNum and season match what I'm looking for, get selected. I then order them from highest series to lowest.
This query is in a for loop, which runs 12 times for men and 12 times for women (there are 12 pairs of lanes in the bowling center), with only the lanesNum and gender parameters changing.
I then put all the results in two different vectors in Java to display the results in an application (one vector for men, one for women).
Without any indexes whatsoever, it takes around 11 seconds to run everything including putting the results in a vector and all of that. (5.5 seconds for the 12 queries for men, same for women).
With an index on (gender, lanesNum, series), it takes 0.04 seconds for the whole thing, which is amazing, since that's a more than acceptable speed for my needs.
I used that index because those are the most important fields in my WHERE clause, but I don't get why it speeds things up that much, because I tried other things and some other indexes actually made my queries slower by more than 100%. Also, I'm wondering whether I would get an even faster query if I added bowlout and season to that index.
I wanted to try a single column index on series first and test performance. That's the index that made all of those queries take a total of 22 seconds.
I came to the conclusion that I don't understand where I should be using my indexes and when I should be using them on multiple fields, or using multiple indexes on single fields, etc. Also, I don't understand how using (the wrong) indexes can actually make performance worse.
Optimizing an index too aggressively for just one query runs the risk of slowing down other queries (and thus a real-world application, or the next version of it). However, let us do exactly that, as an exercise in analysing index performance.
Indexes influence query performance in multiple ways; their existence can actually completely change the algorithm that the database server will use to get to the data. A nice overview is here, but as your query is simple, and you actually have very few relevant indexes in your database (the one you see, and also automatically created indexes to support the primary keys of your tables) we can simplify the story greatly.
A good index makes it faster to cross reference the data between the tables. Ideally it contains columns in your USING and WHERE clauses, and enough of them to reference a unique row in its table most of the time. If it contains less, it may still be used by the database server, but the remaining rows will have to be visited one by one.
A great index does all that, and it also contains all the data you will be selecting from the table (yes, this makes sense even though the two tables are actually the same physical table due to the self-join; the database server still processes them as if they were two different tables that incidentally hold the same data). The benefit of such a "fully covering" index is that the database server does not have to visit its table at all; all the columns are available in the index.
Order of columns in the index matters. It is especially essential that the leftmost column in the index appears in the USING clause, or WHERE clause; otherwise the index is pretty much unusable as matching data for a single lookup can appear in many locations in that index. It should also be highly selective (have many different values in the table). Do a few experiments now to see this first hand.
For this reason, the first choice index I'd suggest to you would be series, gender, lanesNum, bowlout; but yours is also a very good one for this query.
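For concreteness, those two candidate indexes in CREATE INDEX form (index names are mine):

CREATE INDEX idx_series_gender_lanes_bowlout ON SeriesScores (Series, Gender, LanesNum, Bowlout);
CREATE INDEX idx_gender_lanes_series ON SeriesScores (Gender, LanesNum, Series);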
There is not much use in creating more than one index explicitly. There is basically no use for more than one of them during query execution, because your query is so simple. So the most useful one will supposedly win and all the others will be ignored.
To your last question: some people believe that superfluous indexes only slow down UPDATE, INSERT and DELETE statements (because these carry the overhead to update the indexes), but it is not that simple. As the database server considers multiple algorithms to compute your query (there are two logical tables to start from and automatic and explicit indexes to use, or not to use), it may choose the wrong plan: an index may look seductive without knowing the data distribution in the table, but be very counterproductive given the distribution.
There is actually a way to let the database server analyze the data and record some statistics that will greatly help it optimize your subsequent queries reasonably, and probably avoid any 22-second executions of your query (until you change your data so much that the statistics no longer hold true). That is the ANALYZE command. Issue it every time after you change your indexes to see the subsequent SQLite performance at its best. In a production database, schedule ANALYZE to execute every night, so that your database does not gradually slow down over time, or abruptly after adding a harmless but useless index.
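In SQLite that is literally a one-liner, run after bulk data changes or index modifications:

ANALYZE;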
In a certain app I must constantly query data that is likely to be among the last inserted rows. Since this table is going to grow a lot, I wonder if there's a standard way of optimizing the queries by making them start the lookup at the table's end. I think I would get the same optimization if the database stored the table's data in a stack-like structure, so the last inserted rows would be searched first.
The SQL spec doesn't mention anything about maintaining insertion order, and in practice most decent DBs don't maintain it either. So it stops there: sorting the table first isn't going to make anything faster. Just index the column(s) of interest (at least the ones you use in the WHERE clause).
One of the "tenets" of a proper RDBMS is that this kind of matters shouldn't concern you or anyone else using the DB.
The DB engine is "free" to use whatever method it wants to store/retrieve records, so if you want to enforce a "newest first" behaviour, do what others suggested: add a timestamp field to the table (or tables), add an index on it, and use it as a sort and/or query criterion (e.g. you poll the table each minute and ask for records with timestamp >= systime - 1 minute).
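A sketch of that timestamp approach in T-SQL, with hypothetical table and column names:

ALTER TABLE dbo.Orders ADD CreatedAt DATETIME NOT NULL DEFAULT GETDATE();
CREATE INDEX IX_Orders_CreatedAt ON dbo.Orders (CreatedAt);
-- poll: everything inserted in the last minute
SELECT * FROM dbo.Orders WHERE CreatedAt >= DATEADD(MINUTE, -1, GETDATE());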
There is no standard way.
In some databases you can specify the sort order on an index.
SQL Server allows you to write ASC or DESC on an index:
[ ASC | DESC ]
Determines the ascending or descending sort direction for the particular index column. The default is ASC.
In MySQL you can also write ASC or DESC when you create the index but currently this is ignored. It might be implemented in a future version.
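For example, in SQL Server (hypothetical names):

CREATE INDEX IX_Events_CreatedAt_Desc ON dbo.Events (CreatedAt DESC);
-- newest rows first, served in index order:
SELECT TOP (10) * FROM dbo.Events ORDER BY CreatedAt DESC;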
Add a counter or a time field to your table, sort on it, and take the top rows.
In other words: you should forget the idea that SQL tables are accessed in any particular order by default. A seqscan does not mean the oldest rows will be searched first, only that all rows will be checked. If you want to optimize some search, you add indexes on some fields. What you are looking for is probably an index.
If your data is indexed, it won't matter. The index is doing a binary search, not a sequential scan.
Unless you're doing TOP 1 (or something like it), the SELECT will have to scan the whole table or index anyway.
According to Data Independence you shouldn't care. That said, a clustered index would probably suit your needs if you typically look for a date range. (Sorting asc/desc shouldn't matter, but you should try it out.)
If you find that you really need it you can also shard your database to increase perf on the most recently added data.
If you have enough rows that it's actually becoming a problem, and you know how many rows "the most recently inserted rows" should cover, you could try a roundabout method.
Note: even for pretty big tables this is less efficient overall, but once your main table gets big enough, I've seen this work wonders for user-facing performance.
Create a "staging" table that exactly mimics your table's structure. Whenever you insert into your main table, also insert into your "staging" area. Limit your "staging" area to n rows by using a trigger to delete the lowest id row in the table when a new row over your arbitrary maximum is reached (say, 10,000 or whatever your limit is).
Then, queries can hit that smaller table first looking for the information. Since the table is arbitrarilly limited to the last n rows, it's only looking in the most recent data. Only if that fails to find a match would your query (actually, at this point a stored procedure because of the decision making) hit your main table.
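A minimal T-SQL sketch of the idea, with entirely hypothetical table, column, and trigger names:

CREATE TABLE dbo.RecentRows (ID INT PRIMARY KEY, Payload NVARCHAR(256)); -- mirror of the main table, trimmed for brevity
GO
CREATE TRIGGER trg_MainTable_CopyToStaging ON dbo.MainTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- copy the new rows into the staging area
    INSERT INTO dbo.RecentRows (ID, Payload)
    SELECT ID, Payload FROM inserted;
    -- keep only the newest 10,000 rows
    DELETE FROM dbo.RecentRows
    WHERE ID NOT IN (SELECT TOP (10000) ID FROM dbo.RecentRows ORDER BY ID DESC);
END;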
Some Gotchas:
1) Make sure your trigger(s) is (are) set up properly to maintain the correct concurrency between your "main" and "staging" tables.
2) This can quickly become a maintenance nightmare if not handled properly, and depending on your scenario it can be a little finicky.
3) I cannot stress enough that this is only efficient/useful in very specific scenarios. If yours doesn't match it, use one of the other answers.
ISO/ANSI Standard SQL does not consider optimization at all. For example the widely recognized CREATE INDEX SQL DDL does not appear in the Standard. This is because the Standard makes no assumptions about the underlying storage medium and nor should it. I regularly use SQL to query data in text files and Excel spreadsheets, neither of which have any concept of database indexes.
You can't do this.
However, there is a way to do something that might be even better. Depending on the design of your table, you should be able to create an index that keeps things in almost the order of entry. For example, if you adopt the common practice of an id field that autoincrements, then an index on it is just about in chronological order.
Some RDBMSes permit you to declare a backwards index, that is, one that descends instead of ascending. If you create a backwards index on the ID field, and if the optimizer uses that index, it will look at the most recent entries first. This will give you a rapid response for the first row.
The next step is to get the optimizer to use the index. You need to use explain plan to see if the index is being used. If you ask for the rows in order of id descending, the optimizer will almost certainly use the backwards index. If not you may be able to use hints to guide the optimizer.
If you still need to avoid reading all the rows in order to avoid wasting time, you may be able to use the LIMIT feature to declare that you only want, say 10 rows, and no more, or 1 row and no more. That should do it.
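Putting those pieces together in MySQL-style syntax (hypothetical names; recall the earlier caveat that older MySQL versions accept but ignore DESC on an index):

CREATE INDEX idx_orders_id_desc ON orders (id DESC);
-- newest first; the optimizer can walk the descending index and stop after 10 rows
SELECT * FROM orders ORDER BY id DESC LIMIT 10;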
Good luck.
If your table has a create date, then I'd reverse sort by that and take the top 1.