Given the following:
-- This table will have roughly 14 million records
CREATE TABLE IdMappings
(
Id int IDENTITY(1,1) NOT NULL,
OldId int NOT NULL,
NewId int NOT NULL,
RecordType varchar(80) NOT NULL, -- 15 distinct values, will never increase
Processed bit NOT NULL DEFAULT 0,
CONSTRAINT pk_IdMappings
PRIMARY KEY CLUSTERED (Id ASC)
)
CREATE UNIQUE INDEX ux_IdMappings_OldId ON IdMappings (OldId);
CREATE UNIQUE INDEX ux_IdMappings_NewId ON IdMappings (NewId);
and this is the most common query run against the table:
WHILE @firstBatchId <= @maxBatchId
BEGIN
    -- the result of this is used to insert into another table:
    SELECT
        map.NewId -- and lots of non-indexed columns from SOME_TABLE
    FROM
        IdMappings map
        INNER JOIN
        SOME_TABLE foo ON foo.Id = map.OldId
    WHERE
        map.Id BETWEEN @firstBatchId AND @lastBatchId
        AND map.RecordType = @someRecordType
        AND map.Processed = 0

    -- We only really need this in case the user kills the binary or SQL Server service:
    UPDATE IdMappings
    SET Processed = 1
    WHERE Id BETWEEN @firstBatchId AND @lastBatchId
        AND RecordType = @someRecordType

    SET @firstBatchId += 4999
    SET @lastBatchId += 4999
END
What are the best indices to add? I figure Processed isn't worth indexing since it only has 2 values. Is it worth indexing RecordType since there are only about 15 distinct values? How many distinct values does a column need before it's worth indexing?
Is there any advantage in a composite index if some of the fields are in the WHERE clause and some are in a JOIN's ON condition? For example:
CREATE INDEX ix_IdMappings_RecordType_OldId
ON IdMappings (RecordType, OldId)
... if I wanted both these fields indexed (I'm not saying I do), does this composite index gain any advantage given that the two columns don't appear together in the same WHERE or the same ON?
Insert time into IdMappings isn't really an issue. After we insert all records into the table, we don't need to do so again for months if ever.
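For illustration only (not a recommendation from this thread): the kind of composite index the question is weighing could also be written as a filtered, covering index keyed the way the batch query filters. The key order, the INCLUDE list, and the filtered predicate here are assumptions:
-- Hypothetical sketch: seek on RecordType plus the Id range, carry the columns the
-- SELECT needs, and only index unprocessed rows (an assumption, not part of the question).
CREATE INDEX ix_IdMappings_RecordType_Id
    ON IdMappings (RecordType, Id)
    INCLUDE (OldId, NewId)
    WHERE Processed = 0;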
Related
In legacy code I found an index as follows:
CREATE CLUSTERED INDEX ix_MyTable_foo ON MyTable
(
id ASC,
name ASC
)
If I understand correctly, this index would be useful for querying on column id alone, or id and name. Do I have that correct?
So it would possibly improve retrieval of records by doing:
select someColumn from MyTable where id = 4
But it would do nothing for this query:
select someColumn from MyTable where name = 'test'
Yes, you are right. But consider a table with more columns, where the primary key is (A) and there is a second unique index on (B, C). The engine may decide to use that second index for the query shown after the table definition below:
CREATE TABLE dbo.StackOverflow
(
A INT
,B INT
,C INT
,D INT
,E INT
,PRIMARY KEY (A)
,CONSTRAINT IX UNIQUE(B,C)
)
SELECT A
,C
FROM dbo.StackOverflow
WHERE C = 0;
So, if an index covers the query, the engine may use it even when you are not filtering on its leading column, because it will decide that less work (fewer reads) is needed to scan the smaller index.
In your case, it seems that a primary key on the id column combined with a second index on the name column is the better choice.
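A minimal sketch of that arrangement, assuming SQL Server and the MyTable/id/name names from the question (it also assumes id is NOT NULL, and the existing ix_MyTable_foo clustered index would have to be dropped first):
-- Cluster on id alone via the primary key, and index name separately.
ALTER TABLE MyTable ADD CONSTRAINT pk_MyTable PRIMARY KEY CLUSTERED (id);
CREATE NONCLUSTERED INDEX ix_MyTable_name ON MyTable (name);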
I have a SQL table with a column called [applied]. Only one row out of all rows can be applied (have the value 1); all other rows should have the value 0.
Is there a check constraint that I can write to enforce such a rule?
If you use null instead of 0, it will be much easier.
Have a CHECK constraint to make sure the (non-null) value = 1, and a UNIQUE constraint to allow only a single row with the value 1.
create table testtable (
id int primary key,
applied int,
constraint applied_unique unique (applied),
constraint applied_eq_1 check (applied = 1)
);
Core ANSI SQL, i.e. expected to work with any database.
Most databases support filtered indexes:
create unique index unq_t_applied on t(applied) where applied = 1;
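For example (table and column names follow the index definition above; the id column is an assumption):
insert into t (id, applied) values (1, 1);  -- ok: first applied row
insert into t (id, applied) values (2, 0);  -- ok: the filtered index ignores applied = 0
insert into t (id, applied) values (3, 1);  -- fails: a second applied = 1 row violates unq_t_applied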
To say exactly how to write a trigger that will help, we would need to know which database you are using.
You will need a trigger in which this is your test:
SELECT COUNT(APPLIED)
FROM TEST
WHERE APPLIED = 1
If the count is greater than 0, do not allow the insert; otherwise allow it.
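As an illustration only (assuming SQL Server and the TEST/APPLIED names from the snippet above; other databases use different trigger syntax), that check could be wrapped in an AFTER trigger. Note the threshold becomes greater than 1 because an AFTER trigger already sees the new row:
CREATE TRIGGER trg_TEST_OnlyOneApplied
ON TEST
AFTER INSERT, UPDATE
AS
BEGIN
    -- If more than one row ends up applied, undo the change.
    IF (SELECT COUNT(*) FROM TEST WHERE APPLIED = 1) > 1
    BEGIN
        ROLLBACK TRANSACTION;
        THROW 50000, 'Only one row may have APPLIED = 1.', 1;
    END
END;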
While this can be done with triggers and constraints, they probably require an index. Instead, consider a join table.
create table things_applied (
id smallint primary key default 1,
thing_id bigint references things(id) not null,
check(id = 1)
);
Because the primary key is unique, there can only ever be one row.
The first is activated with an insert.
insert into things_applied (thing_id) values (1);
Change it by updating the row.
update things_applied set thing_id = 2;
To deactivate completely, delete the row.
delete from things_applied;
To find the active row, join with the table.
select t.*
from things t
join things_applied ta on ta.thing_id = t.id
To check if it's active at all, count the rows.
select count(id) as active
from things_applied
I am trying to optimize the query I use for fetching paginated data from database with large data sets.
My schema looks like this:
CREATE TABLE users (
user_id TEXT PRIMARY KEY,
name TEXT,
custom_fields TEXT
);
CREATE TABLE events (
event_id TEXT PRIMARY KEY,
organizer_id TEXT NOT NULL REFERENCES users(user_id) ON DELETE SET NULL ON UPDATE CASCADE,
name TEXT NOT NULL,
type TEXT NOT NULL,
start_time INTEGER,
duration INTEGER
-- more columns here, omitted for the sake of simplicity
);
CREATE INDEX events_organizer_id_start_time_idx ON events(organizer_id, start_time);
CREATE INDEX events_organizer_id_type_idx ON events(organizer_id, type);
CREATE INDEX events_organizer_id_type_start_time_idx ON events(organizer_id, type, start_time);
CREATE INDEX events_type_start_time_idx ON events(type, start_time);
CREATE INDEX events_start_time_desc_idx ON events(start_time DESC);
CREATE INDEX events_start_time_asc_idx ON events(IFNULL(start_time, 253402300800) ASC);
CREATE TABLE event_participants (
participant_id TEXT NOT NULL REFERENCES users(user_id) ON DELETE CASCADE ON UPDATE CASCADE,
event_id TEXT NOT NULL REFERENCES events(event_id) ON DELETE CASCADE ON UPDATE CASCADE,
role INTEGER NOT NULL DEFAULT 0,
UNIQUE (participant_id, event_id) ON CONFLICT REPLACE
);
CREATE INDEX event_participants_participant_id_event_id_idx ON event_participants(participant_id, event_id);
CREATE INDEX event_participants_event_id_idx ON event_participants(event_id);
CREATE TABLE event_tag_maps (
event_id TEXT NOT NULL REFERENCES events(event_id) ON DELETE CASCADE ON UPDATE CASCADE,
tag_id TEXT NOT NULL,
PRIMARY KEY (event_id, tag_id) ON CONFLICT IGNORE
);
CREATE INDEX event_tag_maps_event_id_tag_id_idx ON event_tag_maps(event_id, tag_id);
The events table has around 1,500,000 entries and event_participants around 2,000,000.
Now, a typical query would look something like:
SELECT
EVTS.event_id,
EVTS.type,
EVTS.name,
EVTS.start_time,
EVTS.duration
FROM events AS EVTS
WHERE
EVTS.organizer_id IN(
'f39c3bb1-3ee3-11e6-a0dc-005056c00008',
'4555e70f-3f1d-11e6-a0dc-005056c00008',
'6e7e33ae-3f1c-11e6-a0dc-005056c00008',
'4850a6a0-3ee4-11e6-a0dc-005056c00008',
'e06f784c-3eea-11e6-a0dc-005056c00008',
'bc6a0f73-3f1d-11e6-a0dc-005056c00008',
'68959fb5-3ef3-11e6-a0dc-005056c00008',
'c4c96cf2-3f1a-11e6-a0dc-005056c00008',
'727e49d1-3f1b-11e6-a0dc-005056c00008',
'930bcfb6-3f09-11e6-a0dc-005056c00008')
AND EVTS.type IN('Meeting', 'Conversation')
AND(
EXISTS (
SELECT 1 FROM event_tag_maps AS ETM WHERE ETM.event_id = EVTS.event_id AND
ETM.tag_id IN ('00000000-0000-0000-0000-000000000000', '6ae6870f-1aac-11e6-aeb9-005056c00008', '6ae6870c-1aac-11e6-aeb9-005056c00008', '1f6d3ccb-eaed-4068-a46b-ec2547fec1ff'))
OR NOT EXISTS (
SELECT 1 FROM event_tag_maps AS ETM WHERE ETM.event_id = EVTS.event_id)
)
AND EXISTS (
SELECT 1 FROM event_participants AS EPRTS
WHERE
EVTS.event_id = EPRTS.event_id
AND participant_id NOT IN('79869516-3ef2-11e6-a0dc-005056c00008', '79869515-3ef2-11e6-a0dc-005056c00008', '79869516-4e18-11e6-a0dc-005056c00008')
)
ORDER BY IFNULL(EVTS.start_time, 253402300800) ASC
LIMIT 100 OFFSET @Offset;
Also, for fetching the overall count of the query-matching items, I would use the above query with count(1) instead of the columns and without the ORDER BY and LIMIT/OFFSET clauses.
I experience two main problems here:
1) The performance drastically decreases as I increase the @Offset value. The difference is very significant: from almost immediate to several seconds.
2) The count query takes a long time (several seconds) and produces the following execution plan:
0|0|0|SCAN TABLE events AS EVTS
0|0|0|EXECUTE LIST SUBQUERY 1
0|0|0|EXECUTE LIST SUBQUERY 1
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 1
1|0|0|SEARCH TABLE event_tag_maps AS ETM USING COVERING INDEX event_tag_maps_event_id_tag_id_idx (event_id=? AND tag_id=?)
1|0|0|EXECUTE LIST SUBQUERY 2
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 2
2|0|0|SEARCH TABLE event_tag_maps AS ETM USING COVERING INDEX event_tag_maps_event_id_tag_id_idx (event_id=?)
0|0|0|EXECUTE CORRELATED SCALAR SUBQUERY 3
3|0|0|SEARCH TABLE event_participants AS EPRTS USING INDEX event_participants_event_id_idx (event_id=?)
Here I don't understand why the full scan is performed instead of an index scan.
Additional info and SQLite settings used:
I use System.Data.SQLite provider (have to, because of custom functions support)
Page size = cluster size (4096 in my case)
Cache size = 100000
Journal mode = WAL
Temp store = 2 (memory)
No transaction is open for the query
Is there anything I could do to change the query/schema or settings in order to get as much performance improvement as possible?
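(Not from the original thread, but one technique often suggested for the deep-OFFSET slowdown described above is keyset, or seek, pagination: remember the sort key of the last row of the previous page instead of skipping rows. A minimal sketch under the assumptions that @lastStartTime and @lastEventId are hypothetical parameters carried over from the previous page, and that event_id is added as a tie-breaker to make the ordering deterministic; the original organizer/type/tag/participant filters would stay as they are.)
SELECT event_id, type, name, start_time, duration
FROM events
WHERE
    -- ... keep the same organizer_id / type / tag / participant filters as above ...
    (IFNULL(start_time, 253402300800) > IFNULL(@lastStartTime, 253402300800)
     OR (IFNULL(start_time, 253402300800) = IFNULL(@lastStartTime, 253402300800)
         AND event_id > @lastEventId))
ORDER BY IFNULL(start_time, 253402300800) ASC, event_id ASC
LIMIT 100;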
I have this table:
CREATE TABLE `search_engine_rankings` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`keyword_id` int(11) DEFAULT NULL,
`search_engine_id` int(11) DEFAULT NULL,
`total_results` int(11) DEFAULT NULL,
`rank` int(11) DEFAULT NULL,
`url` varchar(255) DEFAULT NULL,
`created_at` datetime DEFAULT NULL,
`updated_at` datetime DEFAULT NULL,
`indexed_at` date DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `unique_ranking` (`keyword_id`,`search_engine_id`,`rank`,`indexed_at`),
KEY `search_engine_rankings_search_engine_id_fk` (`search_engine_id`),
CONSTRAINT `search_engine_rankings_keyword_id_fk` FOREIGN KEY (`keyword_id`) REFERENCES `keywords` (`id`) ON DELETE CASCADE,
CONSTRAINT `search_engine_rankings_search_engine_id_fk` FOREIGN KEY (`search_engine_id`) REFERENCES `search_engines` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=244454637 DEFAULT CHARSET=utf8
It has about 250M rows in production.
When I do:
select id,
rank
from search_engine_rankings
where keyword_id = 19
and search_engine_id = 11
and indexed_at = "2010-12-03";
...it runs very quickly.
When I add the url column (VARCHAR):
select id,
rank,
url
from search_engine_rankings
where keyword_id = 19
and search_engine_id = 11
and indexed_at = "2010-12-03";
...it runs very slowly.
Any ideas?
The first query can be satisfied by the index alone -- no need to read the base table to obtain the values in the Select clause. The second statement requires reads of the base table because the URL column is not part of the index.
UNIQUE KEY `unique_ranking` (`keyword_id`,`search_engine_id`,`rank`,`indexed_at`),
The rows in the base table are not in the same physical order as the rows in the index, so the read of the base table can involve considerable disk thrashing.
You can think of the first query as a kind of demonstration of the optimizer at work: the disk thrashing is avoided because the engine is smart enough to consult the index for the values requested in the SELECT clause; it has already read that index into RAM for the WHERE clause, so it takes advantage of that fact.
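To make the second query index-only as well, one option (not from the original answer) would be a covering index that also carries url. This is only a sketch; with utf8 and varchar(255), it can bump into InnoDB's index key-length limit on older MySQL versions:
-- Hypothetical covering index so url can also be read from the index alone.
-- Caveat: keyword_id + search_engine_id + indexed_at + rank + url(255, utf8)
-- may exceed the 767-byte key limit on older InnoDB row formats.
ALTER TABLE search_engine_rankings
  ADD INDEX idx_rankings_covering (keyword_id, search_engine_id, indexed_at, rank, url);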
In addition to Tim's answer: an index in MySQL can only be used left to right, which means the WHERE clause can only seek on the index columns up to the first one it does not reference.
Currently, your UNIQUE index is (keyword_id, search_engine_id, rank, indexed_at). Because the query does not filter on rank, it can only seek on keyword_id and search_engine_id, and it still has to scan the remaining matching entries to filter on indexed_at.
But if you change it to (keyword_id, search_engine_id, indexed_at, rank), changing only the order, it can seek on keyword_id, search_engine_id and indexed_at.
I believe it will then be able to fully use that index to read the appropriate part of your table.
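A hedged sketch of that reordering (index and table names are taken from the question; the column set is unchanged, so the uniqueness rule stays the same):
ALTER TABLE search_engine_rankings DROP INDEX unique_ranking;
ALTER TABLE search_engine_rankings
  ADD UNIQUE INDEX unique_ranking (keyword_id, search_engine_id, indexed_at, rank);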
I know it's an old post, but I was experiencing the same situation and didn't find an answer.
This really happens in MySQL: selecting varchar columns adds a lot of processing time. My query took about 20 seconds to process 1.7M rows and now takes about 1.9 seconds.
OK, first of all, create a view from this query:
CREATE VIEW view_one AS
select id,rank
from search_engine_rankings
where keyword_id = 19000
and search_engine_id = 11
and indexed_at = "2010-12-03";
Second, run the same query, but with an inner join against the view:
select v.*, s.url
from view_one AS v
inner join search_engine_rankings s ON s.id=v.id;
TLDR: I solved this by running optimize on the table.
I experienced the same just now. Even lookups on the primary key selecting just a few rows were slow. Testing a bit, I found it was not limited to the varchar column; selecting an int also took a considerable amount of time.
A query roughly looking like this took around 3s:
select someint from mytable where id in (1234, 12345, 123456).
While a query roughly looking like this took <10ms:
select count(*) from mytable where id in (1234, 12345, 123456).
The accepted answer here is to just make an index that also spans someint, and it will be fast, as MySQL can fetch all the information it needs from the index and won't have to touch the table. That probably works in some settings, but I think it's a silly workaround: something is clearly wrong, it should not take three seconds to fetch three rows from a table! Besides, most applications just do a "select * from mytable", and making changes on the application side is not always trivial.
After OPTIMIZE TABLE, both queries take <10ms.
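For reference, the statement the TLDR refers to (mytable stands in for whichever table is affected):
OPTIMIZE TABLE mytable;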
How could I set a constraint on a table so that only one of the records has its isDefault bit field set to 1?
The constraint is not at table scope; instead, there should be one default per set of rows, as specified by a FormID.
Use a unique filtered index
On SQL Server 2008 or higher you can simply use a unique filtered index
CREATE UNIQUE INDEX IX_TableName_FormID_isDefault
ON TableName(FormID)
WHERE isDefault = 1
Where the table is
CREATE TABLE TableName(
FormID INT NOT NULL,
isDefault BIT NOT NULL
)
For example, if you try to insert more than one row with the same FormID and isDefault set to 1, you will get this error:
Cannot insert duplicate key row in object 'dbo.TableName' with unique
index 'IX_TableName_FormID_isDefault'. The duplicate key value is (1).
Source: http://technet.microsoft.com/en-us/library/cc280372.aspx
Here's a modification of Damien_The_Unbeliever's solution that allows one default per FormID.
CREATE VIEW form_defaults
WITH SCHEMABINDING -- required, along with a schema-qualified table name, before the view can be indexed
AS
SELECT FormID
FROM dbo.whatever
WHERE isDefault = 1
GO
CREATE UNIQUE CLUSTERED INDEX ix_form_defaults on form_defaults (FormID)
GO
But the serious relational folks will tell you this information should just be in another table.
CREATE TABLE form (
    FormID int NOT NULL PRIMARY KEY,
    DefaultWhateverID int FOREIGN KEY REFERENCES Whatever(ID)
)
From a normalization perspective, this would be an inefficient way of storing a single fact.
I would opt to hold this information at a higher level, by storing (in a different table) a foreign key to the identifier of the row which is considered to be the default.
CREATE TABLE [dbo].[Foo](
[Id] [int] NOT NULL,
CONSTRAINT [PK_Foo] PRIMARY KEY CLUSTERED
(
[Id] ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[DefaultSettings](
[DefaultFoo] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DefaultSettings] WITH CHECK ADD CONSTRAINT [FK_DefaultSettings_Foo] FOREIGN KEY([DefaultFoo])
REFERENCES [dbo].[Foo] ([Id])
GO
ALTER TABLE [dbo].[DefaultSettings] CHECK CONSTRAINT [FK_DefaultSettings_Foo]
GO
You could use an insert/update trigger.
Within the trigger after an insert or update, if the count of rows with isDefault = 1 is more than 1, then rollback the transaction.
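A hedged sketch of that idea (assuming SQL Server and the TableName/FormID/isDefault names used earlier in this thread), checking per FormID as the question requires:
CREATE TRIGGER trg_TableName_OneDefaultPerForm
ON TableName
AFTER INSERT, UPDATE
AS
BEGIN
    -- Roll back if any FormID ends up with more than one default row.
    IF EXISTS (
        SELECT FormID
        FROM TableName
        WHERE isDefault = 1
        GROUP BY FormID
        HAVING COUNT(*) > 1
    )
    BEGIN
        ROLLBACK TRANSACTION;
        THROW 50000, 'Only one row per FormID may have isDefault = 1.', 1;
    END
END;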
CREATE VIEW vOnlyOneDefault
WITH SCHEMABINDING -- required, along with a schema-qualified table name, before the view can be indexed
AS
SELECT 1 as Lock
FROM dbo.<underlying table>
WHERE [Default] = 1
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOnlyOneDefault on vOnlyOneDefault (Lock)
GO
You'll need to have the right ANSI settings turned on for this.
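For reference, these are the SET options SQL Server requires when creating and using an indexed view (a reasonable reading of "the right ANSI settings"; verify against the documentation for your version):
SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT,
    CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;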
I don't know about SQL Server, but if it supports function-based indexes like Oracle does, I hope this can be translated; if not, sorry.
Supposing the default value is 1234, the column is DEFAULT_COLUMN, and ID_COLUMN is the primary key, you can create an index like this:
CREATE
UNIQUE
INDEX only_one_default
ON my_table
( DECODE(DEFAULT_COLUMN, 1234, -1, ID_COLUMN) )
This DDL creates a unique index whose key is -1 when DEFAULT_COLUMN is 1234, and ID_COLUMN in any other case. So if two rows have the default value in DEFAULT_COLUMN, the second one raises a unique-constraint exception.
The question implies to me that you have a primary table that has some child records, and one of those child records will be the default record. Using addresses and a separate default table, here is an example of how to make that happen using third normal form. Of course I don't know if it's valuable to answer something that is so old, but it struck my fancy.
--drop table dev.defaultAddress;
--drop table dev.addresses;
--drop table dev.people;
CREATE TABLE [dev].[people](
[Id] [int] identity primary key,
name char(20)
)
GO
CREATE TABLE [dev].[Addresses](
id int identity primary key,
peopleId int foreign key references dev.people(id),
address varchar(100)
) ON [PRIMARY]
GO
CREATE TABLE [dev].[defaultAddress](
id int identity primary key,
peopleId int foreign key references dev.people(id),
addressesId int foreign key references dev.addresses(id))
go
create unique index defaultAddress on dev.defaultAddress (peopleId)
go
create unique index idx_addr_id_person on dev.addresses(peopleid,id);
go
ALTER TABLE dev.defaultAddress
ADD CONSTRAINT FK_Def_People_Address
FOREIGN KEY(peopleID, addressesID)
REFERENCES dev.Addresses(peopleId, id)
go
insert into dev.people (name)
select 'Bill' union
select 'John' union
select 'Harry'
insert into dev.Addresses (peopleid, address)
select 1, '123 someplace' union
select 1,'work place' union
select 2,'home address' union
select 3,'some address'
insert into dev.defaultaddress (peopleId, addressesid)
select 1,1 union
select 2,3
-- so two people now have a default address
-- try adding another default address to Bill and you get an error
select * from dev.people
join dev.addresses on people.id = addresses.peopleid
left join dev.defaultAddress on defaultAddress.peopleid = people.id and defaultaddress.addressesid = addresses.id
insert into dev.defaultaddress (peopleId, addressesId)
select 1,2
GO
You could do it through an INSTEAD OF trigger, or, if you want it as a constraint, create a constraint that references a function that counts the rows with the default set to 1.
EDIT: oops, the check needs to be <=.
Create table mytable(id1 int, defaultX bit not null default(0))
go
create Function dbo.fx_DefaultExists()
returns int as
Begin
    Declare @ret int
    Set @ret = 0
    Select @ret = count(1) from mytable
    Where defaultX = 1
    Return @ret
End
GO
Alter table mytable add
CONSTRAINT [CHK_DEFAULT_SET] CHECK
(([dbo].fx_DefaultExists()<=(1)))
GO
Insert into mytable (id1, defaultX) values (1,1)
Insert into mytable (id1, defaultX) values (2,1) -- this second insert violates CHK_DEFAULT_SET and fails
This is a fairly complex process that cannot be handled through a simple constraint.
We do this through a trigger. However before you write the trigger you need to be able to answer several things:
do we want to fail the insert if a default exists, change it to 0 instead of 1 or change the existing default to 0 and leave this one as 1?
what do we want to do if the default record is deleted and other non default records are still there? Do we make one the default, if so how do we determine which one?
You will also need to be very, very careful to make the trigger handle multiple row processing. For instance a client might decide that all of the records of a particular type should be the default. You wouldn't change a million records one at a time, so this trigger needs to be able to handle that. It also needs to handle that without looping or the use of a cursor (you really don't want the type of transaction discussed above to take hours locking up the table the whole time).
You also need a very extensive testing scenario for this trigger before it goes live. You need to test:
adding a record with no default and it is the first record for that customer
adding a record with a default and it is the first record for that customer
adding a record with no default and it is not the first record for that customer
adding a record with a default and it is not the first record for that customer
Updating a record to have the default when no other record has it (assuming you don't require one record to always be set as the default)
Updating a record to remove the default
Deleting the record with the default
Deleting a record without the default
Performing a mass insert with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record inserts
Performing a mass update with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record updates
Performing a mass delete with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record deletes
@Andy Jones gave the answer above that is closest to mine, but bearing in mind the Rule of Three, I placed the logic directly in the stored proc that updates this table. This was my simple solution. If I need to update the table from elsewhere, I will move the logic to a trigger. The one-default rule applies to each set of records specified by a FormID and a ConfigID:
ALTER proc [dbo].[cpForm_UpdateLinkedReport]
    @reportLinkId int,
    @defaultYN bit,
    @linkName nvarchar(150)
as
if @defaultYN = 1
begin
    declare @formId int, @configId int
    select @formId = FormID, @configId = ConfigID from csReportLink where ReportLinkID = @reportLinkId
    update csReportLink set DefaultYN = 0 where isnull(ConfigID, @configId) = @configId and FormID = @formId
end
update
    csReportLink
set
    DefaultYN = @defaultYN,
    LinkName = @linkName
where
    ReportLinkID = @reportLinkId