Is the addition of a second ID column beneficial to index? - sql-server-2005

Let's say I have a table tbl_FacilityOrders with two foreign keys, fk_FacilityID and fk_OrderID, in SQL Server 2005. It could contain orders from a few hundred facilities. I need to query single records and will have both the facility ID and the order ID available to me. Is it better to define an index on fk_FacilityID then fk_OrderID and pass both to the query, or to just use fk_OrderID? Since there will be fewer facility IDs than order IDs, I could see weeding out the other facilities' records first possibly being beneficial.
A second question: if I were using the two-column query above, does the order in which I write my WHERE clause columns matter, or is the engine smart enough to evaluate them in the order of the index?
E.g., is:
WHERE fk_FacilityID = @FacilityID AND fk_OrderID = @OrderID
equivalent to:
WHERE fk_OrderID = @OrderID AND fk_FacilityID = @FacilityID
?

Is it better to define an index on fk_FacilityID then fk_OrderID and pass both to the query, or to just use fk_OrderID?
If OrderId is unique, there's no real added benefit to adding the other field for the scenario given. It is a good idea to index your FKs, though, since they will always be a JOIN key.
if I were using the two-column query above, does the order in which I write my WHERE clause columns matter, or is the engine smart enough to evaluate them in the order of the index?
Nope, order is irrelevant here. All that matters is that the SETS of fields match, i.e. FieldA and FieldB are both in the index and in the WHERE clause.
The order of fields in the index DOES matter, though. You can't use the second field in an index without knowing the value of the first field.
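A minimal sketch of such a composite index, using the table and column names from the question (the index name is my own):

-- Facility first, then order: a seek can use fk_FacilityID alone,
-- or both columns together, but never fk_OrderID alone.
CREATE NONCLUSTERED INDEX IX_FacilityOrders_Facility_Order
    ON tbl_FacilityOrders (fk_FacilityID, fk_OrderID);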

You should create an index for each of your foreign keys... not just for the purposes of this question, but because indexing your foreign keys is good practice in general.
To answer your second question, the two statements are equivalent. SQL Server should internally re-order the predicates to arrive at the optimal execution plan... however, you should always validate the generated execution plan just to make sure that it's behaving as you would expect.
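One way to check, assuming the composite index sketched above exists (the variable names are placeholders): display the textual plan for both predicate orders and confirm they are identical.

SET SHOWPLAN_TEXT ON;
GO
-- Neither SELECT is executed; only the compiled plan text is returned.
DECLARE @FacilityID INT, @OrderID INT;
SELECT * FROM tbl_FacilityOrders WHERE fk_FacilityID = @FacilityID AND fk_OrderID = @OrderID;
SELECT * FROM tbl_FacilityOrders WHERE fk_OrderID = @OrderID AND fk_FacilityID = @FacilityID;
GO
SET SHOWPLAN_TEXT OFF;
GO

Both plans should show the same seek on the composite index.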

Related

Does a clustered index on a column GUARANTEE returning rows sorted by that column? [duplicate]

I am unable to get a clear-cut answer to this contentious question.
The MSDN documentation says:
Clustered
Clustered indexes sort and store the data rows in the table or view
based on their key values. These are the columns included in the
index definition. There can be only one clustered index per table,
because the data rows themselves can be sorted in only one order.
The only time the data rows in a table are stored in sorted order is
when the table contains a clustered index. When a table has a
clustered index, the table is called a clustered table. If a table
has no clustered index, its data rows are stored in an unordered
structure called a heap.
Yet most of the answers I see, such as
Does a SELECT query always return rows in the same order? Table with clustered index
http://sqlwithmanoj.com/2013/06/02/clustered-index-do-not-guarantee-physically-ordering-or-sorting-of-rows/
answer in the negative.
Which is it?
Just to be clear. Presumably, you are talking about a simple query such as:
select *
from t;
First, if all the data on the table fits on a single page and there are no other indexes on the table, it is hard for me to imagine a scenario where the result set is not ordered by the primary key. However, this is because I think the most reasonable query plan would require a full-table scan, not because of any requirement -- documented or otherwise -- in SQL or SQL Server. Without an explicit order by, the ordering in the result set is a consequence of the query plan.
That gets to the heart of the issue. When you are talking about the ordering of result sets, you are really talking about the query plan. And the assumption of ordering by the primary key really means that you are assuming the query uses a full-table scan. What is ironic is that people make the assumption without actually understanding the "why". Furthermore, people have a tendency to generalize from small examples (okay, this is part of the basis of human intelligence). Unfortunately, they consistently see that result sets from simple queries on small tables are always in primary key order, and they generalize to larger tables. The induction step is incorrect in this example.
What can change this? Off-hand, I think that a full table scan would return the data in primary key order if the following conditions are met:
Single threaded server.
Single file filegroup
No competing indexes
No table partitions
I'm not saying this is always true. It just seems reasonable that under these circumstances such a query would use a full table scan starting at the beginning of the table.
Even on a small table, you can get surprises. Consider:
select NonPrimaryKeyColumn
from t
The query plan would probably decide to use an index on t(NonPrimaryKeyColumn) rather than doing a full table scan. The results would not be ordered by the primary key (unless by accident). I show this example because indexes can be used for a variety of purposes, not just order by or where filtering.
If you use a multi-threaded instance of the database and you have reasonably sized tables, you will quickly learn that results without an order by have no explicit ordering.
And finally, SQL Server has a pretty smart optimizer. I think there is some reluctance to use order by in a query because users think it will automatically do a sort. SQL Server works hard to find the best execution plan for the query. If it recognizes that the order by is redundant because of the rest of the plan, then the order by will not result in a sort.
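A small way to see this for yourself, on a hypothetical table dbo.t clustered on col1:

SET SHOWPLAN_TEXT ON;
GO
-- Expect a Clustered Index Scan marked ORDERED FORWARD and no Sort operator:
SELECT * FROM dbo.t ORDER BY col1;
GO
SET SHOWPLAN_TEXT OFF;
GO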
And, of course, if you want to guarantee the ordering of results, you need an order by in the outermost query. Even a query like this:
select *
from (select top 100 t.* from t order by col1) t
does not guarantee that the results are ordered in the final result set. You really need to do:
select *
from (select top 100 t.* from t order by col1) t
order by col1;
to guarantee the results in a particular order. This behavior is documented here.
Without ORDER BY, there is no default sort order even if you have a clustered index.
In this link there is a good example:
CREATE SCHEMA Data AUTHORIZATION dbo
GO
CREATE TABLE Data.Numbers(Number INT NOT NULL PRIMARY KEY)
GO
DECLARE @ID INT;
SET NOCOUNT ON;
SET @ID = 1;
WHILE @ID < 100000 BEGIN
    INSERT INTO Data.Numbers(Number)
    SELECT @ID;
    SET @ID = @ID + 1;
END
CREATE TABLE Data.WideTable(ID INT NOT NULL
    CONSTRAINT PK_WideTable PRIMARY KEY,
    RandomInt INT NOT NULL,
    CHARFiller CHAR(1000))
GO
-- RAND() cannot be called directly inside a user-defined function,
-- so it is wrapped in a view:
CREATE VIEW dbo.WrappedRand
AS
SELECT RAND() AS random_value
GO
CREATE FUNCTION dbo.RandomInt()
RETURNS INT
AS
BEGIN
    DECLARE @ret INT;
    SET @ret = (SELECT random_value * 1000000 FROM dbo.WrappedRand);
    RETURN @ret;
END
GO
INSERT INTO Data.WideTable(ID, RandomInt, CHARFiller)
SELECT Number, dbo.RandomInt(), 'asdf'
FROM Data.Numbers
GO
CREATE INDEX WideTable_RandomInt ON Data.WideTable(RandomInt)
GO
SELECT TOP 100 ID FROM Data.WideTable
OUTPUT:
1407
253
9175
6568
4506
1623
581
As you have seen, the optimizer has chosen to use a non-clustered
index to satisfy this SELECT TOP query.
Clearly you cannot assume that your results are ordered unless you
explicitly use ORDER BY clause.
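If the first 100 IDs in key order were actually wanted, the fix is the explicit ORDER BY:

-- Now the optimizer must deliver key order, whichever access path it picks:
SELECT TOP 100 ID FROM Data.WideTable ORDER BY ID;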
One must specify ORDER BY in the outermost query in order to guarantee rows are returned in a particular order. The SQL Server optimizer will optimize the query and data access to improve performance, which may result in rows being returned in a different order. Examples of this are allocation-order scans and parallelism. A relational table should always be viewed as an unordered set of rows.
I wish the MSDN documentation were clearer about this "sorting". It is more correct to say that SQL Server b-tree indexes provide ordering by 1) storing adjacent keys in the same page and 2) linking index pages in key order.

SQL Server - Indexing and Operator precedence

I have a table Student as below:
Student(id,jdate)
where the column id is the primary key. Now I'm writing a query as below:
select * from Student where id=2 and (jdate='date1' or jdate='date2')
Will the index work here? Or should I modify it as below?
select * from Student where (id=2) and (jdate='date1' or jdate='date2')
Both your examples will use the PK Index for column 'id'.
In case it is not clear: the operator "=" has precedence over "and", and thus the parentheses are not necessary.
Since you are declaring a PK on the id column, you are defining a unique clustered index on the table as well. And since you are using the id column in the where clause, the index should be used.
Both queries will use the index, and the parentheses around id = 2 don't change anything in the logic / condition evaluation.
Yes, both queries will work and will both hit any relevant clustered or non clustered index.
Given that id is your table's PK, you probably won't even hit any index on jdate: although at first glance an index on (id, jdate) seems useful, in practice it would be redundant, since id is the PK and queries targeting a single id will either use the clustered index (if the default PK clustering is used) or the PK constraint itself (if the table has different clustering).
Although the spurious parentheses around id = 2 will be ignored, AND obviously has precedence over OR, so the parentheses surrounding the OR are essential:
... and (... or ...)
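To see why, here is a sketch of what dropping those parentheses would do to the query from the question:

-- Without parentheses, AND binds tighter than OR, so this:
SELECT * FROM Student WHERE id = 2 AND jdate = 'date1' OR jdate = 'date2';
-- is evaluated as (id = 2 AND jdate = 'date1') OR (jdate = 'date2'),
-- which also returns other students' rows matching 'date2'.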
As other users said, both queries are the same, and the PK index will be used. If you have any doubts about which index is used (in this or other queries), see the execution plan: http://technet.microsoft.com/en-us/library/ms178071%28v=sql.105%29.aspx
The execution plan is a very useful tool; for example, it may point out missing indexes.

Index for table in SQL Server 2012

I had a question on indexes. I have a table like this:
id BIGINT PRIMARY KEY NOT NULL,
cust_id VARCHAR(8) NOT NULL,
dt DATE NOT NULL,
sale_type VARCHAR(10) NOT NULL,
sale_type_sub VARCHAR(40),
amount DOUBLE PRECISION NOT NULL
The table has several million rows. Assuming that queries will often filter results by date ranges, sale types, amounts above and below certain values, and that joins will occur on cust_id... what do you all think is the ideal index structure?
I wasn't sure whether a clustered index would be best, individual indexes on each column, or both.
Any serious table in SQL Server should always have a well-chosen, good clustering key - it makes so many things faster and more efficient. From your table structure, I'd use the ID as the clustering key.
Next, you say joins occur on cust_id - so I would put an index on cust_id. This speeds up joins in general and is a generally accepted recommendation.
Next, it really depends on your queries. Are they all using the same columns in their WHERE clauses? Or do you get queries that use dt, and others that use sale_type separately?
The point is: the fewer indices the better - so if ever possible, I'd try to find one compound index that covers all your needs. But if you have an index on three columns (e.g. on (sale_type, dt, amount)), then that index can be used for queries:
using all three columns in the WHERE clause
using sale_type and dt in the WHERE clause
using only sale_type in the WHERE clause
but it could NOT be used for queries that use dt or amount alone. A compound index always requires you to use the n left-most columns in the index definition - otherwise it cannot be used.
So my recommendation would be (a DDL sketch follows the list):
define the clustering key on ID
define a nonclustered index on cust_id for the JOINs
examine your system to see what other queries you have - what criteria are being used for selection, and how often do those queries execute? Don't over-optimize a query that's executed once a month - but do spend time on those that are executed dozens of times every hour.
Add one index at a time - let the system run for a bit - do you measure an improvement in query times? Does it feel faster? If so: leave that index. If not: drop it again. Iterate until you're happy with the overall system performance.
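Put into DDL, the first two recommendations plus the compound-index example might look like this (the table and index names are mine; the question did not name the table):

CREATE TABLE dbo.FactSale (
    id            BIGINT      NOT NULL PRIMARY KEY CLUSTERED,  -- clustering key on ID
    cust_id       VARCHAR(8)  NOT NULL,
    dt            DATE        NOT NULL,
    sale_type     VARCHAR(10) NOT NULL,
    sale_type_sub VARCHAR(40),
    amount        FLOAT       NOT NULL  -- DOUBLE PRECISION is a synonym for FLOAT(53)
);

-- Nonclustered index on cust_id for the JOINs:
CREATE NONCLUSTERED INDEX IX_FactSale_CustId ON dbo.FactSale (cust_id);

-- The example compound index: usable for sale_type alone, for (sale_type, dt),
-- or for all three columns, but not for dt or amount alone:
CREATE NONCLUSTERED INDEX IX_FactSale_Type_Dt_Amount
    ON dbo.FactSale (sale_type, dt, amount);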
The best way to find indexes for your table is SQL Server Profiler.

SQL Server index included columns

I need help understanding how to create indexes. I have a table that looks like this
Id
Name
Age
Location
Education
PhoneNumber
My query looks like this:
SELECT *
FROM table1
WHERE name = 'sam'
What's the correct way to create an index for this with included columns?
What if the query has an ORDER BY statement?
SELECT *
FROM table1
WHERE name = 'sam'
ORDER BY id DESC
What if I have 2 parameters in my where statement?
SELECT *
FROM table1
WHERE name = 'sam'
AND age > 12
The correct way to create an index with included columns? Either via Management Studio/Toad/etc, or SQL (documentation):
CREATE INDEX idx_table1 ON dbo.table1 (name) INCLUDE (id);
What if the query has an ORDER BY?
The ORDER BY can use indexes, if the optimizer sees fit to (determined by table statistics & query). It's up to you to test if a composite index or an index with INCLUDE columns works best by reviewing the query cost.
If id is the clustered key (not always the primary key though), I probably wouldn't INCLUDE the column...
What if I have 2 parameters in my where statement?
Same as above - you need to test what works best for your query. Might be composite, or include, or separate indexes.
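For the two-parameter query, the two usual candidates look like this (the index names are mine; pick one, not both):

-- Composite key: supports a seek on name combined with a range scan on age.
CREATE NONCLUSTERED INDEX IX_table1_name_age ON dbo.table1 (name, age);

-- INCLUDE variant: age is stored at the leaf level, so the age > 12 filter
-- needs no key lookup, but the index cannot seek on age.
CREATE NONCLUSTERED INDEX IX_table1_name_incl_age ON dbo.table1 (name) INCLUDE (age);

Test both against your workload and keep whichever the plan cost favors.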
But keep in mind that:
tweaking for one query won't necessarily benefit every other query
indexes do slow down INSERT/UPDATE/DELETE statements, and require maintenance
You can use the Database Tuning Advisor (DTA) for index recommendations, including when some are redundant
Recommended reading
I highly recommend reading Kimberly Tripp's "The Tipping Point" for a better understanding of index decisions and impacts.
Since I do not know exactly which tasks your DB is going to perform, or how many records it holds, I would suggest that you take a look at the Index Basics MSDN article. It will allow you to decide for yourself which indexes to create.
If ID is your primary and/or clustered index key, just create an index on Name, Age. This will cover all three queries.
Included fields are best used to retrieve row-level values for columns that are not in the filter list, or to retrieve aggregate values where the sorted field is in the GROUP BY clause.
If inserts are rare, create as many indexes as you want.
For the first query, create an index on the name column.
The Id column, I think, is already the primary key...
Create a second index on name and age. Alternatively, you can keep only one index, (name, age), and it will not be much slower for the first query.

Unique index on two columns plus separate index on each one?

I don't know much about database optimization, but I'm trying to understand this case.
Say I have the following table:
cities
===========
state_id integer
name varchar(32)
slug varchar(32)
Now, say I want to perform queries like this:
SELECT * FROM cities WHERE state_id = 123 AND slug = 'some_city'
SELECT * FROM cities WHERE state_id = 123
If I want the "slug" for a city to be unique within its particular state, I'd add a unique index on state_id and slug.
Is that index enough? Or should I also add another on state_id so the second query is optimized? Or does the second query automatically use the unique index?
I'm working on PostgreSQL, but I feel this case is so simple that most DBMS work similarly.
Also, I know this surely doesn't make a difference on small tables, but my example is a simple one. Think of 200k+ rows tables.
Thanks!
A single unique index on (state_id, slug) should be sufficient. To be sure, of course, you'll need to run EXPLAIN and/or ANALYZE (perhaps with the help of something like http://explain.depesz.com/), but ultimately what indexes are appropriate depends very closely on what kind of queries you will be running. Remember, indexes make SELECTs faster and INSERTs, UPDATEs, and DELETEs slower, so you ideally want only as many indexes as are actually necessary.
Also, PostgreSQL has a smart query optimizer: it will use radically different search plans for queries on small tables and huge tables. If the table is small, it will just do a sequential scan and not even bother with any indexes, since the overhead of working with them is higher than just brute-force sifting through the table. This changes to a different plan once the table size passes a threshold, and may change again if the table gets larger again, or if you change your SELECT, or....
Summary: you can't trust the results of EXPLAIN and ANALYZE on datasets much smaller or different than your actual data. Make it work, then make it fast later (if you need to).
[EDIT: Misread the question... Hopefully my answer is more relevant now!]
In your case, I'd suggest 1 index on (state_id, slug). If you ever need to search just by slug, add an index on just that column. If you have those, then adding another index on state_id is unnecessary as the first index already covers it.
An index can be used whenever an initial segment of its columns are used in a WHERE clause. So e.g. an index on columns A, B and C will optimise queries containing WHERE clauses involving A, B and C, WHERE clauses with just A and B, or WHERE clauses with just A. Note that the order that columns appear in the index definition is very important -- this example index cannot be used for WHERE clauses involving just B and/or C.
(Of course it's up to the query optimiser whether or not a particular index actually gets used, but in your case with 200k rows, you can guarantee that a simple search by state_id or slug or both will use one of the indices.)
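A quick way to confirm this on your own data in PostgreSQL (table and column names from the question):

-- The unique index on the pair:
CREATE UNIQUE INDEX cities_state_id_slug_idx ON cities (state_id, slug);

-- Both queries can use it; the second uses only the leading column:
EXPLAIN ANALYZE SELECT * FROM cities WHERE state_id = 123 AND slug = 'some_city';
EXPLAIN ANALYZE SELECT * FROM cities WHERE state_id = 123;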
Any decent optimizer will see an index on three columns - say:
CREATE INDEX idx_1 ON SomeTable(Col1, Col2, Col3);
and will use that index for any of the following conditions:
WHERE Col1 = ...something...
WHERE Col1 = ...something... AND Col2 = ...otherthing...
WHERE Col3 = ....whatnot....
AND Col1 = ...something....
AND Col2 = ...otherthing...
That is, it will use the index if there are conditions applied to any contiguous leading subset of the columns of the index. Although I used equality, it can also apply to ranges (open - just greater than, for example) or closed (between two values).
To do optimization, use EXPLAIN (http://www.postgresql.org/docs/7.4/static/sql-explain.html) and see for yourself.
But optimization is not the most important reason to create those indexes; first and foremost, the unique index is a constraint that keeps the database logically consistent.