I have a table where I store comments for users. I will have 100 million+ comments.
2 ways I can create it:
Option 1: user name and comment id as PK. That way all comments are stored physically by user name and comment id.
CREATE TABLE [dbo].[Comments](
[user] [varchar](20) NOT NULL,
[com_id] [int] IDENTITY(1,1) NOT NULL,
[com_posted_by] [varchar](20) NOT NULL,
[com_posted_on] [smalldatetime] NOT NULL DEFAULT (getdate()),
[com_text] [nvarchar](225) NOT NULL,
CONSTRAINT [PK_channel_comments] PRIMARY KEY CLUSTERED
([channel] ASC, [com_id] ASC) WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]) ON [PRIMARY]
Pros: My query will be to get all (or the top 10) comments for a user, ordered by com_id DESC. This is a SEEK.
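This is the kind of query I mean (a rough sketch; the parameter name is just illustrative):
SELECT TOP (10) [com_id], [com_posted_by], [com_posted_on], [com_text]
FROM [dbo].[Comments]
WHERE [user] = @user_name          -- seek on the leading key of the clustered PK
ORDER BY [com_id] DESC             -- newest comments first via a backward range scan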
Option 2: I can make the comment id alone the PK. That stores the comments sorted by the comment id, not by user name.
Cons: Getting the latest top 10 comments for a given user is no longer a seek, because the data is not stored by user (i.e. not sorted by user), so I have to create another index to get the query performance back.
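Roughly, Option 2 plus the extra index would look like this (table and index names are just illustrative):
CREATE TABLE [dbo].[Comments2](
[com_id] [int] IDENTITY(1,1) NOT NULL,
[user] [varchar](20) NOT NULL,
[com_posted_by] [varchar](20) NOT NULL,
[com_posted_on] [smalldatetime] NOT NULL DEFAULT (getdate()),
[com_text] [nvarchar](225) NOT NULL,
CONSTRAINT [PK_comments2] PRIMARY KEY CLUSTERED ([com_id] ASC)) ON [PRIMARY]
-- extra index so that "latest comments for a user" is still a seek
CREATE NONCLUSTERED INDEX [IX_comments2_user] ON [dbo].[Comments2] ([user] ASC, [com_id] DESC)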
Which is the best way to proceed?
What about insertion and deletion? Both are allowed, but reads are by far the most frequent operation.
Users can't modify their comments.
I tested both tables with 1.1M rows. Here is the result:
table_name   rows      reserved    data       index_size   unused
comments2    1079892   99488 KB    62824 KB   36576 KB     88 KB   (PK: com_id; second index on (user_name, com_id))
comments1    1079892   82376 KB    82040 KB   328 KB       8 KB    (PK: user_name; no other indexes)
--------------------------------------------------------------------
diff         same rows +17112 KB   -19216 KB  +36248 KB    +80 KB
So the table with com_id as PK uses about 36 MB of extra disk space just for the two indexes.
The SELECT TOP query uses a SEEK on both tables, but the table with com_id as PK is slower.
Insertion, however, is slightly faster when com_id is the PK.
Any comments?
I would use the Comment ID as the Primary Key for the table. If you are going to have a lot of queries that use the Comment ID and the User name, it's probably simpler just to add an index on those fields.
I would not use User name in a PK as it may change, creating cascade update issues later.
Also, concatenating those two into the PK creates a large(r) PK that might have to be passed to other tables as a FK. I try to keep PKs that appear as FKs as small as possible, unless I know I will want all the PKs of the contributing tables in one large key for query speed.
Comment id should be fine.
You may need to create an additional index for fast searching on comment id and user name.
Will you be doing more insertions/updates or queries? If it is query intensive, then the extra index is not an issue.
Are you sure that you have that CREATE TABLE statement correct? You're using [channel] in the PK definition, and I don't see that as a column. Did you mean [user]?
Do you have a user table someplace? If so, you might save a lot of overhead by keying that on an integer value and putting UserID into the comments table, rather than User.
I would PK on the CommentID and then add a non-clustered index on [UserID, CommentID]. That gives you immediate access to a comment by ID (for deleting, etc.) without having to involve the UserID value in the WHERE clause, and it provides quick access to the user's comments. I do not, however, tend to work with tables of the size you anticipate.
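In DDL terms, something along these lines (names assumed for illustration, on the integer-keyed version of the table):
ALTER TABLE dbo.Comments ADD CONSTRAINT PK_Comments PRIMARY KEY CLUSTERED (CommentID)
-- supports "latest N comments for a user" as a seek on (UserID, CommentID)
CREATE NONCLUSTERED INDEX IX_Comments_UserID ON dbo.Comments (UserID, CommentID)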
As a rule of thumb, always choose the narrowest PK. Then, to improve performance, you may want to use an integer-based User_id instead of a varchar, and add an index on both columns.
The best approach will depend on the number of users. If you have just a few users, the (user_id, comment_id) PK could be better (additionally, partitioning by user would be an option); on the other hand, if the number of users is high, a combined PK will be useless.
My initial approach would be to make CommentID alone the PK, maybe in descending order so you don't have to do any reordering on select. Then put an index on UserID.
If you use the concatenated key, consider switching CommentID to desc.
Related
So I'm importing large JSON data into a SQLite database. I'm using transactions for the inserts, and I've tried tables declared with and without NULL/NOT NULL constraints to check the difference in performance.
When I had tables in SQLite that looked like this:
CREATE TABLE comments(
id TEXT,
author TEXT,
body TEXT,
score INTEGER,
created_utc TEXT
);
The import time was really slow, and searching in the table (e.g. select * from comments where author = 'blabla') was also slow.
When instead using a table with explicit NULL or NOT NULL constraints, both the import and the searches got much faster (from 2000 seconds down to 600 seconds).
CREATE TABLE comments(
id TEXT PRIMARY KEY,
author TEXT NOT NULL,
body TEXT NULL,
score INTEGER NULL,
created_utc TEXT NULL
);
Does anyone know why this change in performance happened when using NULL or NOT NULL?
As per my comment, adding PRIMARY KEY may be a major factor behind the improvement for searches, although it may have a negative impact on inserts, as that index has to be maintained.
Coding NULL makes no difference as it just leaves the NOT NULL flag as 0, so that can be ignored.
Coding NOT NULL may result in fewer inserts, because rows that violate the constraint are rejected, and could thus appear as a performance improvement.
Regarding the PRIMARY KEY, coding it as anything other than INTEGER PRIMARY KEY or INTEGER PRIMARY KEY AUTOINCREMENT will result in a subsequent index being created.
That is, if a table is not defined with WITHOUT ROWID then SQLite creates the "REAL" primary index with a normally invisible column named rowid. This uniquely identifies a row. (Try SELECT rowid FROM comments)
As such, in both scenarios there is an index based upon the rowid. For all intents and purposes this will be the order in which the rows were inserted.
In the second scenario there will be 2 indexes: the "REAL" primary index based upon the rowid, and the defined primary index based upon the id column. There would be some impact on inserts due to the 2nd index needing to be maintained.
So say you search the id column for id x. In the first table this will be relatively slow, as the only order it can search by is rowid order; that's all it has. However, once the index on id is added, the search is going to be far more favourable, because that index (of the 2 available) is the one the search would likely be based upon.
Note the above is a pretty simplistic overview; it doesn't consider the SQLite Query Planner, which may be of interest. The ANALYZE statement may also be of interest.
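If you want to check which index a given search actually uses, EXPLAIN QUERY PLAN is a quick (if rough) way to see it; a small sketch, assuming the second table definition above:
EXPLAIN QUERY PLAN SELECT * FROM comments WHERE id = 'abc';        -- should report a search using the index backing the PRIMARY KEY on id
EXPLAIN QUERY PLAN SELECT * FROM comments WHERE author = 'blabla'; -- full table scan unless an index on author is added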
I have a basic reverse lookup table in which the ids are already sorted in ascending numerical order:
id INT NOT NULL,
value INT NOT NULL
The ids are not unique; each id has from 5 to 25,000 associated values. Each id is independent, i.e., no relationships between the ids.
The table is static. Read only, no inserts or updates ever. The table has 100-200 million records. The database itself will be around 7-12gb. Sqlite.
I will do frequent lookups in this table and want the fastest response time for each query. Lookups are one-direction only, unordered, and always of the form:
SELECT value WHERE id IN (x,y,z)
What advantages does the pre-sorted order give me in terms of database efficiency? What should I do differently than I would with typical unordered tables? How do I tell sql that it's an ordered list?
What about indices: is it necessary or even helpful to create an index on id?
[Updated for clustered comment thanks to Gordon Linoff]. As far as I can tell, sqlite doesn't support clustered indices directly. The wiki says: "Are [clustered indices] supported? No, but if you use INTEGER PRIMARY KEY it acts as a clustered index." In my situation, the column id is not unique...
Assuming that space is not an issue, you should create an index on (id, value). This should be sufficient for your purposes.
However, if the table is static, then I would recommend that you create a clustered index when you create the table. The index would have the same keys, (id, value).
If the table happens to be sorted, the database does not know about this, so you'd still need an index.
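A minimal sketch, assuming the table is called lookup:
CREATE INDEX idx_lookup_id_value ON lookup (id, value);
-- the index holds both columns, so the IN (...) lookups never have to touch the base table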
It is a better idea to use a WITHOUT ROWID table (what other DBs call a clustered index):
CREATE TABLE MyLittleLookupTable (
id INTEGER,
value INTEGER,
PRIMARY KEY (id, value)
) WITHOUT ROWID;
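With that layout the rows themselves are stored in (id, value) order, so a lookup is just a handful of B-tree seeks; for example (the id values are illustrative):
SELECT value FROM MyLittleLookupTable WHERE id IN (42, 73, 1001);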
In Microsoft SQL Server, when creating tables, are there any downsides to using a unique constraint on a column even though you don't really need it to be unique?
An example would be descriptions for say a role in a user management system:
CREATE TABLE Role
(
ID TINYINT PRIMARY KEY NOT NULL IDENTITY(0, 1),
Title CHARACTER VARYING(32) NOT NULL UNIQUE,
Description CHARACTER VARYING(MAX) NOT NULL UNIQUE
)
My fear is that validating this constraint when doing frequent insertions in other tables will be a very time consuming process. I am unsure as to how this constraint is validated, but I feel like it could be done in a very efficient way or as a linear comparison.
Your fear is justified: UNIQUE constraints are implemented as indexes, and this is time and space consuming.
So, whenever you insert a new row, the database has to update the table, and also one index for each unique constraint.
So, coming back to your question:
using a unique constraint on a column even though you don't really need it to be unique
the answer is no, don't use it. There are time and space downsides.
Your sample table would need a clustered index for the Id, and 2 extra indices, one for each unique constraint. This takes up space, and time to update the 3 indices on the inserts.
This would only be justified if you made queries filtering by those fields.
BY THE WAY:
The sample table in the original post has several flaws:
that syntax is not SQL Server syntax (and you tagged this as SQL Server)
you cannot create an index on a varchar(max) column
If you correct the syntax and create this table:
CREATE TABLE Role
(
ID tinyint PRIMARY KEY NOT NULL IDENTITY(0, 1),
Title varchar(32) NOT NULL UNIQUE,
Description varchar(32) NOT NULL UNIQUE
)
You can then execute sp_help Role and you'll find the 3 indices.
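Alternatively, a quick look at the catalog views shows the same thing (a sketch; only a few of the available columns):
SELECT name, type_desc, is_unique
FROM sys.indexes
WHERE object_id = OBJECT_ID('Role');   -- lists the clustered PK plus the two unique indexes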
The database creates an index which backs up the UNIQUE constraint, so it should be very low-cost to do the uniqueness check.
http://msdn.microsoft.com/en-us/library/ms177420.aspx
The Database Engine automatically creates a UNIQUE index to enforce the uniqueness requirement of the UNIQUE constraint. Therefore, if an attempt to insert a duplicate row is made, the Database Engine returns an error message that states the UNIQUE constraint has been violated and does not add the row to the table. Unless a clustered index is explicitly specified, a unique, nonclustered index is created by default to enforce the UNIQUE constraint.
Is it typically a good practice to constrain it if you know the data
will always be unique but it doesn't necessarily need to be unique for
the application to function correctly?
My question to you: would it make sense for two roles to have different titles but the same description? e.g.
INSERT INTO Role ( Title , Description )
VALUES ( 'CEO' , 'Senior manager' ),
( 'CTO' , 'Senior manager' );
To me it would seem to devalue the use of the description; if there were many duplications then it might make more sense to do something more like this:
INSERT INTO Role ( Title )
VALUES ( 'CEO' ),
( 'CTO' );
INSERT INTO SeniorManagers ( Title )
VALUES ( 'CEO' ),
( 'CTO' );
But then again you are not expecting duplicates.
I assume this is a low activity table. You say you fear validating this constraint when doing frequent insertions in other tables. Well, that will not happen (unless there is a trigger we cannot see that might update this table when another table is updated).
Personally, I would ask the designer (business analyst, whatever) to justify not applying a unique constraint. If they cannot, then I would impose the unique constraint based on common sense. As is usual for such a text column, I would also apply CHECK constraints, e.g. to disallow leading/trailing/double spaces, zero-length strings, etc.
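For example, something along these lines (constraint name and exact rules are just illustrative):
ALTER TABLE Role ADD CONSTRAINT CK_Role_Title_Clean CHECK (
    LEN(Title) > 0              -- no zero-length or all-space titles
    AND Title NOT LIKE ' %'     -- no leading space
    AND Title NOT LIKE '% '     -- no trailing space
    AND Title NOT LIKE '%  %'   -- no double spaces
);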
On SQL Server, the data type tinyint only gives you 256 distinct values. No matter what you do outside of the id column, you're not going to end up with a very big table. It will surely perform quickly even with a dozen indexed columns.
You usually need at least one unique constraint besides the surrogate key, though. If you don't have one, you're liable to end up with data like this.
ID  Title        Description
1   First title  First description
2   First title  First description
3   First title  First description
...
17  Third title  Third description
18  First title  First description
Tables that permit data like that are usually wrong. Any table that uses foreign key references to this table won't be able to report correctly, say, how many times "First title" is used.
I'd argue that allowing multiple, identical titles for roles in a user management system is a design error. I'd probably argue that "title" is a really bad name for that column, too.
So, I have this funny requirement of creating an index on a table only on a certain set of rows.
This is what my table looks like:
USER: userid, friendid, created, blah0, blah1, ..., blahN
Now, I'd like to create an index on:
(userid, friendid, created)
but only on those rows where userid = friendid. The reason being that this index is only going to be used to satisfy queries where the WHERE clause contains "userid = friendid". There will be many rows where this is NOT the case, and I really don't want to waste all that extra space on the index.
Another option would be to create a separate table (a query table) populated by a trigger on insert/update of this table, but again I am guessing an index on that table would mean the data is stored twice.
How does MySQL store primary keys? I mean, is the table ordered on the primary key, or is it ordered by insert order with the PK acting like a normal unique index?
I checked up on clustered indexes (http://dev.mysql.com/doc/refman/5.0/en/innodb-index-types.html), but it seems only InnoDB supports them. I am using MyISAM (I mention this because then I could have created a clustered index on these 3 fields in the query table).
I am basically looking for something like this:
ALTER TABLE USERS ADD INDEX (userid, friendid, created) WHERE userid=friendid
Regarding the conditional index:
You can't do this. MySQL has no such thing.
Regarding the primary key:
It depends on the storage engine. MySQL does not define how data is stored or retrieved, that's left up to the storage engine.
MyISAM does not enforce any order on how rows are stored; they're appended to the end of the table but gaps from deleting can be reused and UPDATE queries can leave things out of order even without any DELETEs.
InnoDB stores rows in order of their primary keys.
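If you do go the query-table route mentioned in the question, a rough sketch of the insert side might look like this (table name, trigger name, and column types are assumptions, untested):
CREATE TABLE user_self_rows (
  userid   INT NOT NULL,
  friendid INT NOT NULL,
  created  DATETIME NOT NULL,
  KEY idx_self (userid, friendid, created)
);

DELIMITER //
CREATE TRIGGER trg_user_self_rows AFTER INSERT ON `USER`
FOR EACH ROW
BEGIN
  -- copy only the rows the conditional index would have covered
  IF NEW.userid = NEW.friendid THEN
    INSERT INTO user_self_rows (userid, friendid, created)
    VALUES (NEW.userid, NEW.friendid, NEW.created);
  END IF;
END//
DELIMITER ;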
Hard to tell what you're actually trying to do here (why would a user need to be his own friend?), but it seems to me a simple rethinking of your database schema would resolve this problem.
table 1: USER: userid, created, blah0, blah1, ...,
table 2: userIsFriend (user1,user2,...)
and just do your indexing on table 2 (whose elements presumably have foreign key constraint on table 1)
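For instance (the exact columns, types, and engine are assumptions; this also presumes userid is the primary key of table 1):
CREATE TABLE userIsFriend (
  user1   INT NOT NULL,
  user2   INT NOT NULL,
  created DATETIME NOT NULL,
  KEY idx_pair_created (user1, user2, created),
  FOREIGN KEY (user1) REFERENCES `USER` (userid),
  FOREIGN KEY (user2) REFERENCES `USER` (userid)
) ENGINE=InnoDB;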
BTW, you should probably be using InnoDB if you want to do anything semi-serious with MySQL anyway, IMHO.
Why does the INDEX creation statement have a UNIQUE argument?
As I understand it, a non-clustered index contains a bookmark, a pointer to a row, which has to be unique in order to distinguish even non-unique rows,
thereby already ensuring that the non-clustered index is unique?
Correct?
So, do I understand that a non-unique index can exist only on a clustered table? Since
"A clustered index on a view must be unique" [1]
Since "The bottom, or leaf, level of the clustered index contains the actual data rows of the table" [1], do I understand correctly that the same effect as UNIQUE on a clustered index can be achieved by a unique constraint on (possibly all or part of) the columns of a table [2]?
Then, what does the UNIQUE argument bring to an index,
other than confusion about basic concept definitions [3]?
Update:
This is again the same pitfall: explaining something already explained many times, in terms of undefined concepts, which turns the whole explanation into a never-ending guessing game.
Please see my sub-question [4], which is really a re-wording of this same question.
Update2:
The problem is ambiguous or missing definitions, and improper use of terms in the wrong contexts. If an index is defined as a structure serving to (find and) identify/point to the real data, then non-unique or NULL indexes do not make any sense. Bye
Cited:
[1] CREATE INDEX (Transact-SQL) - http://msdn.microsoft.com/en-us/library/ms188783.aspx
[2] CREATE TABLE (Transact-SQL) - http://msdn.microsoft.com/en-us/library/ms174979.aspx
[3] Unique index or unique key?
[4] what is index and can non-clustered index be non-unique?
While a non-unique index is sufficient to distinguish between rows (as you said), the UNIQUE index serves as a constraint: it will prevent duplicates from being entered into the database - where "duplicates" are rows containing the same data in the indexed columns.
Example:
Firstname | Lastname | Login
================================
Joe | Smith | joes
Joe | Taylor | joet
Susan | Smith | susans
Let's assume that login names are by default generated from first name + first letter of last name.
What happens when we try to add Joe Sciavillo to the database? Normally, the system would happily generate the login name joes and insert (Joe, Sciavillo, joes). Now we'd have two users with the same username - probably a Bad Thing.
Now let's say we have a UNIQUE index on the Login column - the database will check that no other row with the same data already exists before it allows the new row to be inserted. In other words, the attempt to insert another joes will be rejected, because that value would no longer be unique in that column.
Of course, you could have unique indexes on multiple columns, in which case the combination of data would have to be unique (e.g. a unique index on Firstname,Lastname will happily accept a row with (Joe,Badzhanov), as the combination is not in the table yet, but would reject a second row with (Joe,Smith))
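In DDL terms, the constraint described above is simply (table and index names assumed):
CREATE UNIQUE INDEX UX_Users_Login ON Users (Login);
-- a second attempt to insert 'joes' now fails instead of silently creating a duplicate login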
The UNIQUE index clause is really just a quirk of syntax in SQL Server and some other DBMSs. In Standard SQL, uniqueness constraints are implemented through the use of the PRIMARY KEY and UNIQUE CONSTRAINT syntax, not through indexes (there are no indexes in standard SQL).
The mechanism SQL Server uses internally to implement uniqueness constraints is called a unique index. A unique index gets created automatically for you whenever you create a PRIMARY KEY or UNIQUE constraint. For reasons best known to the SQL Server development team they decided to expose the UNIQUE keyword as part of the CREATE INDEX syntax, even though the constraint syntax does the same job.
In the interests of clarity and standards support I would recommend you avoid creating UNIQUE indexes explicitly wherever possible. Use the PRIMARY KEY or UNIQUE constraint syntax instead.
The UNIQUE clause specifies that the values in the column(s) must be unique across the table, essentially adding a unique constraint. A clustered index on a table specifies that the ordering of the rows in the table will be the same as the index. A non-clustered index does not change the physical ordering, which is why it is OK to have multiple non-clustered but only one clustered index. You can have unique or non-unique clustered and non-clustered indexes on a table.
I think the underlying question is: what is the difference between unique and non-unique indexes?
The answer is that entries in unique indexes can each only point to a single row, while entries in non-unique indexes can point to many rows.
For example, consider an order item table:
ORDER_NO INTEGER
LINE_NO INTEGER
PRODUCT_NO INTEGER
QUANTITY DECIMAL
- with a unique index on ORDER_NO and LINE_NO, and a non-unique index on PRODUCT_NO.
For a single combination of ORDER_NO and LINE_NO there can only be one entry in the table, while for a single value of PRODUCT_NO there can be many entries in the table (because there will be many entries for that value in the index).
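In index terms, that setup would be something like (table and index names assumed):
CREATE UNIQUE INDEX UX_OrderItem_Order_Line ON OrderItem (ORDER_NO, LINE_NO);  -- each key value points to exactly one row
CREATE INDEX IX_OrderItem_Product ON OrderItem (PRODUCT_NO);                   -- one key value can point to many rows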