Can "auto_increment" on "sub_groups" be enforced at a database level? - sql

In Rails, I have the following
class Token < ActiveRecord::Base
  belongs_to :grid
  attr_accessible :turn_order
end
When you insert a new token, turn_order should auto-increment. HOWEVER, it should only auto-increment for tokens belonging to the same grid.
So, take 4 tokens for example:
Token_1 belongs to Grid_1, turn_order should be 1 upon insert.
Token_2 belongs to Grid_2, turn_Order should be 1 upon insert.
If I insert Token_3 to Grid_1, turn_order should be 2 upon insert.
If I insert Token_4 to Grid_2, turn_order should be 2 upon insert.
There is an additional constraint: imagine I set Token_3.turn_order = 1; Token_1 must then automatically set its turn_order to 2, because within these "sub-groups" there can be no turn_order collision.
I know MySQL has auto_increment; I was wondering if there is any logic that can be applied at the DB level to enforce a constraint such as this: basically auto-incrementing within sub-groups of a table, where those sub-groups are based on a foreign key.
Is this something that can be handled at a DB level, or should I just strive for implementing rock-solid constraints at the application layer?

If I understood your question properly then you could use one of the following two methods (InnoDB vs MyISAM). Personally, I'd take the InnoDB road as I'm a fan of clustered indexes, which MyISAM doesn't support, and I prefer performance over how many lines of code I need to type, but the decision is yours...
http://dev.mysql.com/doc/refman/5.0/en/innodb-table-and-index.html
Rewriting mysql select to reduce time and writing tmp to disk
full sql script here : http://pastie.org/1259734
innodb implementation (recommended)
-- TABLES
drop table if exists grid;
create table grid
(
grid_id int unsigned not null auto_increment primary key,
name varchar(255) not null,
next_token_id int unsigned not null default 0
)
engine = innodb;
drop table if exists grid_token;
create table grid_token
(
grid_id int unsigned not null,
token_id int unsigned not null,
name varchar(255) not null,
primary key (grid_id, token_id) -- note clustered PK order (innodb only)
)
engine = innodb;
-- TRIGGERS
delimiter #
create trigger grid_token_before_ins_trig before insert on grid_token
for each row
begin
  declare tid int unsigned default 0;

  -- fetch the next per-grid token id and stamp it on the new row
  select next_token_id + 1 into tid from grid where grid_id = new.grid_id;
  set new.token_id = tid;

  -- remember the counter so the next insert on this grid continues the sequence
  update grid set next_token_id = tid where grid_id = new.grid_id;
end#
delimiter ;
-- TEST DATA
insert into grid (name) values ('g1'),('g2'),('g3');
insert into grid_token (grid_id, name) values
(1,'g1 t1'),(1,'g1 t2'),(1,'g1 t3'),
(2,'g2 t1'),
(3,'g3 t1'),(3,'g3 t2');
select * from grid;
select * from grid_token;
myisam implementation (not recommended)
-- TABLES
drop table if exists grid;
create table grid
(
grid_id int unsigned not null auto_increment primary key,
name varchar(255) not null
)
engine = myisam;
drop table if exists grid_token;
create table grid_token
(
grid_id int unsigned not null,
token_id int unsigned not null auto_increment,
name varchar(255) not null,
primary key (grid_id, token_id) -- non clustered PK
)
engine = myisam;
-- TEST DATA
insert into grid (name) values ('g1'),('g2'),('g3');
insert into grid_token (grid_id, name) values
(1,'g1 t1'),(1,'g1 t2'),(1,'g1 t3'),
(2,'g2 t1'),
(3,'g3 t1'),(3,'g3 t2');
select * from grid;
select * from grid_token;

My opinion: Rock-solid constraints at the app level. You may get it to work in SQL -- I've seen some people do some pretty amazing stuff. A lot of SQL logic used to be squirreled away in triggers, but I don't see much of that lately.
This smells more like business logic and you absolutely can get it done in Ruby without wrapping yourself around a tree. And... people will be able to see the tests and read the code.

This to me sounds like something you'd want to handle in an after_save method or in an observer. If the model itself doesn't need to be aware of when or how something increments, then I'd stick the business logic in the observer. This approach will make the incrementing logic more expressive to other developers and keep it database-agnostic.

Related

Having troubles with Identity field of SQL-SERVER

I'm doing a school project about a school theme where I need to create some tables for Students, Classes, Programmes...
I want to add a Group to certain classes, with an auto-incrementing group_id. However, I want the group_id to reset whenever any of these attributes changes (Classes_id, courses_acronym, year_Semesters). How can I reset it every time any of those change?
Here is my table:
CREATE TABLE Classes_Groups(
    Classes_id varchar(2),
    Group_id INT IDENTITY(1,1),
    courses_acronym varchar(4),
    year_Semesters varchar(5),
    FOREIGN KEY (Classes_id, year_Semesters, courses_acronym) REFERENCES Classes(id, year_Semesters, courses_acronym),
    PRIMARY KEY(Classes_id, courses_acronym, year_Semesters, Group_id)
);
Normally, you do not (need to) reset the identity column of a table. An identity column is used to create unique values for every single record in a table.
So you want to generate entries in your groups table based on new entries in your classes table. You might create a trigger on your classes table for that purpose.
Since Group_id is already unique by itself (because of its IDENTITY), you do not need other fields in the primary key at all. Instead, you may create a separate UNIQUE constraint for the combination (Classes_id, courses_acronym, year_Semesters) if you need it.
And if the id field of your classes table is an IDENTITY column too, you could define a primary key in your classes table solely on that id field. Then the foreign key constraint in your new groups table would only need to include that Classes_id field.
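Putting those suggestions together, a minimal sketch (the constraint names are just for illustration, and the unique constraint belongs only if one group per class/course/semester combination is really what you want):
CREATE TABLE Classes_Groups(
    Group_id INT IDENTITY(1,1) PRIMARY KEY,
    Classes_id varchar(2) NOT NULL,
    courses_acronym varchar(4) NOT NULL,
    year_Semesters varchar(5) NOT NULL,
    FOREIGN KEY (Classes_id, year_Semesters, courses_acronym)
        REFERENCES Classes(id, year_Semesters, courses_acronym),
    -- optional: only if the combination really has to be unique
    CONSTRAINT UQ_Classes_Groups UNIQUE (Classes_id, courses_acronym, year_Semesters)
);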
So much for now. I guess your database design needs some more tuning and tweaking. ;)
Where are you setting the values from? You could use a stored procedure: capture the initial values when the procedure is called (assuming there are values at the beginning), then use an IF statement.
declare @initial_Classes_id varchar(2) = --initial value inserted
declare @initial_courses_acronym varchar(4) = --initial value inserted
declare @initial_year_Semesters varchar(5) = --initial value inserted
-- I would add a DateAdded column and order by it to find the last inserted row
declare @compare_Classes_id varchar(2) = (select top 1 Classes_id from Classes_Groups order by Group_id desc);
declare @compare_courses_acronym varchar(4) = (select top 1 courses_acronym from Classes_Groups where Classes_id = @compare_Classes_id);
declare @compare_year_Semesters varchar(5) = (select top 1 year_Semesters from Classes_Groups where Classes_id = @compare_Classes_id);
IF (@initial_Classes_id != @compare_Classes_id OR @initial_courses_acronym != @compare_courses_acronym OR @initial_year_Semesters != @compare_year_Semesters)
BEGIN
    DBCC CHECKIDENT ('Classes_Groups', RESEED, 1)
    Insert into Classes_Groups (Classes_id, courses_acronym, year_Semesters)
    values (@initial_Classes_id, @initial_courses_acronym, @initial_year_Semesters)
END
ELSE
BEGIN
    Insert into Classes_Groups (Classes_id, courses_acronym, year_Semesters)
    values (@initial_Classes_id, @initial_courses_acronym, @initial_year_Semesters)
END
NB: I would advise using an int for the primary key, unless you have a specific reason for doing otherwise.

One to many to many relationship, with composite keys

In my game, an archetype is a collection of associated traits, an attack type, a damage type, and a resource type. Each piece of data is unique to each
archetype. For example, the Mage archetype might look like the following:
archetype: Mage
attack: Targeted Area Effect
damage: Shock
resource: Mana
trait_defense: Willpower
trait_offense: Intelligence
This is the archetype table in SQLite syntax:
create table archetype
(
archetype_id varchar(16) not null,
attack_id varchar(16) not null,
damage_id varchar(16) not null,
resource_type_id varchar(16) not null,
trait_defense_id varchar(16) not null,
trait_offense_id varchar(16) not null,
archetype_description varchar(128),
constraint pk_archetype primary key (archetype_id),
constraint uk_archetype unique (attack_id, damage_id,
resource_type_id,
trait_defense_id,
trait_offense_id)
);
The primary key should be the complete composite, but I do not want to pass
all the data around to other tables unless necessary. For example, there are
crafting skills associated with each archetype which do not need to know any
other archetype data.
An effect is a combat outcome that can be applied to a friend or foe. An effect has an application type (instant, overtime), a type (buff, debuff, harm, heal, etc.) and a detail describing to which stat the effect applies. It also has most of the archetype data to make each effect unique. Also included is the associated trait used for progress and skill checks. For example, an effect might look like:
apply: Instant
type: Harm
detail: Health
archetype: Mage
attack_id: Targeted Area Effect
damage_id: Shock
resource: Mana
trait_id: Intelligence
This is the effect table in SQLite syntax:
create table effect
(
effect_apply_id varchar(16) not null,
effect_type_id varchar(16) not null,
effect_detail_id varchar(16) not null,
archetype_id varchar(16) not null,
attack_id varchar(16) not null,
damage_id varchar(16) not null,
resource_type_id varchar(16) not null,
trait_id varchar(16),
constraint pk_effect primary key(archetype_id, effect_type_id,
effect_detail_id, effect_apply_id,
attack_id, damage_id, resource_type_id),
constraint fk_effect_archetype_id foreign key(archetype_id, attack_id,
damage_id, resource_type_id)
references archetype (archetype_id, attack_id,
damage_id, resource_type_id)
);
An ability is a container that can hold multiple effects. There is no limit to
the kinds of effects it can hold, e.g. having both Mage and Warrior effects in
the same ability, or even having two of the same effects, is fine. Each effect
in the ability is going to have the archetype data, and the effect data.
Again.
Ability tables in SQLite syntax:
create table ability
(
ability_id varchar(64),
ability_description varchar(128),
constraint pk_ability primary key (ability_id)
);
create table ability_effect
(
ability_effect_id integer primary key autoincrement,
ability_id varchar(64) not null,
archetype_id varchar(16) not null,
effect_type_id varchar(16) not null,
effect_detail_id varchar(16) not null,
effect_apply_id varchar(16) not null,
attack_id varchar(16) not null,
damage_id varchar(16) not null,
resource_type_id varchar(16) not null,
trait_id varchar(16),
constraint fk_ability_effect_ability_id foreign key (ability_id)
references ability (ability_id),
constraint fk_ability_effect_effect_id foreign key (archetype_id,
effect_type_id,
effect_detail_id,
effect_apply_id)
references effect (archetype_id,
effect_type_id,
effect_detail_id,
effect_apply_id)
);
This is basically a one to many to many relationship, so I needed a technical
key to have duplicate effects in the ability_effect table.
Questions:
1) Is there a better way to design these tables to avoid the duplication of
data over these three tables?
2) Should these tables be broken down further?
3) Is it better to perform multiple table lookups to collect all the data? For example, just passing around the archetype_id and doing lookups for the data when necessary (which will be often).
UPDATE:
I actually do have parent tables for attacks, damage, etc. I removed those
tables and their related indexes from the sample to make the question clean,
concise, and focused on my duplicate data issue.
I was trying to avoid each table having both an id and a name, as both would be candidate keys and so having both would be wasted space. I was trying to keep the SQLite database as small as possible. (Hence, the many "varchar(16)"
declarations, which I now know SQLite ignores.) It seems in SQLite having both
values is unavoidable, unless being twice as slow is somehow ok when using the
WITHOUT ROWID option during table creation. So, I will rewrite my database to
use ids and names via the rowid implementation.
Thanks for your input guys!
1) Is there a better way to design these tables to avoid the
duplication of data over these three tables?
and also
2) Should these tables be broken down further?
It would appear so.
It would appear Mage is a unique archetype, as is Warrior (based upon "For example, the Mage archetype might look like the following:").
As such, why not make the archtype_id a primary key and then reference the attack type, damage, etc. from tables of their own, i.e. have an attack table and a damage table.
So you could, for example, have something like (simplified for demonstration) :-
DROP TABLE IF EXISTS archtype;
DROP TABLE IF EXISTS attack;
DROP TABLE IF EXISTS damage;
CREATE TABLE IF NOT EXISTS attack (attack_id INTEGER PRIMARY KEY, attack_name TEXT, a_more_columns TEXT);
INSERT INTO attack (attack_name, a_more_columns) VALUES
('Targetted Affect','ta blah'), -- id 1
('AOE','aoe blah'), -- id 2
('Bounce Effect','bounce blah') -- id 3
;
CREATE TABLE IF NOT EXISTS damage (damage_id INTEGER PRIMARY KEY, damage_name TEXT, d_more_columns TEXT);
INSERT INTO damage (damage_name,d_more_columns) VALUES
('Shock','shock blah'), -- id 1
('Freeze','freeze blah'), -- id 2
('Fire','fire blah'), -- id 3
('Hit','hit blah')
;
CREATE TABLE IF NOT EXISTS archtype (id INTEGER PRIMARY KEY, archtype_name TEXT, attack_id_ref INTEGER, damage_id_ref INTEGER, at_more_columns TEXT);
INSERT INTO archtype (archtype_name,attack_id_ref,damage_id_ref,at_more_columns) VALUES
('Mage',1,1,'Mage blah'),
('Warrior',3,4,'Warrior Blah'),
('Dragon',2,3,'Dragon blah'),
('Iceman',2,2,'Iceman blah')
;
SELECT archtype_name, damage_name, attack_name FROM archtype JOIN damage ON damage_id_ref = damage_id JOIN attack ON attack_id_ref = attack_id;
Note that the aliases of rowid have been used for id's rather than the name as these are generally the most efficient.
The data for rowid tables is stored as a B-Tree structure containing one entry for each table row, using the rowid value as the key. This means that retrieving or sorting records by rowid is fast. Searching for a record with a specific rowid, or for all records with rowids within a specified range is around twice as fast as a similar search made by specifying any other PRIMARY KEY or indexed value. SQL As Understood By SQLite - CREATE TABLE- ROWIDs and the INTEGER PRIMARY KEY
A rowid is generated for all rows (unless WITHOUT ROWID is specified); a column declared INTEGER PRIMARY KEY is an alias of the rowid.
Beware of using AUTOINCREMENT. Unlike other RDBMSs that use this keyword to automatically generate unique ids for rows, SQLite creates a unique id (the rowid) by default. The AUTOINCREMENT keyword adds a constraint that ensures the generated id is larger than the highest existing one. To do this requires an additional table, sqlite_sequence, that has to be maintained and interrogated, and as such has overheads. The AUTOINCREMENT keyword imposes extra CPU, memory, disk space, and disk I/O overhead and should be avoided if not strictly needed. It is usually not needed. SQLite Autoincrement
The query at the end returns one row per archetype along with its attack and damage names.
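As a small aside on the rowid behaviour described above (the demo table name is made up for illustration), an INTEGER PRIMARY KEY column gets its values automatically without the AUTOINCREMENT keyword:
CREATE TABLE IF NOT EXISTS demo (demo_id INTEGER PRIMARY KEY, demo_name TEXT);
INSERT INTO demo (demo_name) VALUES ('first'), ('second');
-- demo_id is an alias of the rowid, so both columns show the same values (1 and 2)
SELECT demo_id, rowid, demo_name FROM demo;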
Now say you wanted types to have multiple attacks and damages per type. The above could easily be adapted by using many-to-many relationships, introducing reference/mapping/link tables (all just different names for the same thing). Such a table has two columns (sometimes other columns for data specific to the distinct reference/map/link): one for the parent (archtype) reference and the other for the referenced child (attack/damage).
e.g. the following could be added :-
DROP TABLE IF EXISTS archtype_attack_reference;
CREATE TABLE IF NOT EXISTS archtype_attack_reference
(aar_archtype_id INTEGER NOT NULL, aar_attack_id INTEGER NOT NULL,
PRIMARY KEY(aar_archtype_id,aar_attack_id))
WITHOUT ROWID;
DROP TABLE IF EXISTS archtype_damage_reference;
CREATE TABLE IF NOT EXISTS archtype_damage_reference
(adr_archtype_id INTEGER NOT NULL, adr_damage_id INTEGER NOT NULL,
PRIMARY KEY(adr_archtype_id,adr_damage_id))
WITHOUT ROWID
;
INSERT INTO archtype_attack_reference VALUES
(1,1), -- Mage has attack Targetted
(1,3), -- Mage has attack Bounce
(3,2), -- Dragon has attack AOE
(2,1), -- Warrior has attack targetted
(2,2), -- Warrior has attack AOE
(4,2), -- Iceman has attack AOE
(4,3) -- Icemane has attack Bounce
;
INSERT INTO archtype_damage_reference VALUES
(1,1),(1,3), -- Mage can damage with Shock and Freeze
(2,4), -- Warrior can damage with Hit
(3,3),(3,4), -- Dragon can damage with Fire and Hit
(4,2),(4,4) -- Iceman can damage with Freeze and Hit
;
SELECT archtype_name, attack_name,damage_name FROM archtype
JOIN archtype_attack_reference ON archtype_id = aar_archtype_id
JOIN archtype_damage_reference ON archtype_id = adr_archtype_id
JOIN attack ON aar_attack_id = attack_id
JOIN damage ON adr_damage_id = damage_id
;
The query returns a row for every combination of each archetype's attacks and damages.
With a slight change the above query could even be used to perform a random attack e.g. :-
SELECT archtype_name, attack_name,damage_name FROM archtype
JOIN archtype_attack_reference ON archtype_id = aar_archtype_id
JOIN archtype_damage_reference ON archtype_id = adr_archtype_id
JOIN attack ON aar_attack_id = attack_id
JOIN damage ON adr_damage_id = damage_id
ORDER BY random() LIMIT 1 -- ADDED THIS LINE
;
Each run returns a single, randomly chosen archetype/attack/damage combination; another run may well return a different one.
3) Is it better to perform multiple table lookups to collect all the
data? For example, just passing around the archetype_id and doing
lookups for the data when necessary (which will be often).
That's pretty hard to say. You may initially think it best to gather all the data once and keep it in memory, say as an object. However, at times the underlying data may well already be in memory due to caching. Perhaps it would be better to use a mixture of both, so I believe the answer is: you will need to test the various scenarios.
I would probably avoid those composite primary keys.
And use the more commonly used integer with an autoincrement.
Then add the unique or non-unique composite indexes where needed.
Although IMHO it's not always a bad idea to use a short CHAR or VARCHAR as the primary key in some cases, mostly when easy-to-understand abbreviations can be used.
An example: suppose you have a reference table for Countries with a primary key on the 2-character CountryCode. When querying a table with a foreign key on that CountryCode, it's much easier for the human mind to understand 'US' than some integer. Even without joining to Countries you'll probably know which Country is referenced.
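A small sketch of that idea (the table and column names are invented for the example):
CREATE TABLE Countries (
    CountryCode CHAR(2) PRIMARY KEY,   -- 'US', 'DE', ...
    CountryName VARCHAR(64) NOT NULL
);
CREATE TABLE Customers (
    Customer_id INTEGER PRIMARY KEY,
    CustomerName VARCHAR(64) NOT NULL,
    CountryCode CHAR(2) NOT NULL REFERENCES Countries (CountryCode)  -- readable even without a join
);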
So here are your tables with a slightly different layout.
create table archetype
(
archetype_id integer primary key autoincrement,
attack_id varchar(16) not null,
damage_id varchar(16) not null,
resource_type_id varchar(16) not null,
trait_defense_id varchar(16) not null,
trait_offense_id varchar(16) not null,
archetype_description varchar(128),
constraint uk_archetype unique (attack_id, damage_id,
resource_type_id,
trait_defense_id,
trait_offense_id)
);
create table effect
(
effect_id integer primary key autoincrement,
archetype_id integer not null, -- FK archetype
effect_apply_id varchar(16) not null,
effect_type_id varchar(16) not null,
effect_detail_id varchar(16) not null,
attack_id varchar(16) not null,
damage_id varchar(16) not null,
resource_type_id varchar(16) not null,
trait_id varchar(16),
constraint pk_effect unique(archetype_id, effect_type_id,
effect_detail_id, effect_apply_id,
attack_id, damage_id, resource_type_id),
constraint fk_effect_archetype_id foreign key(archetype_id)
references archetype (archetype_id)
);
create table ability
(
ability_id integer primary key autoincrement,
ability_description varchar(128)
);
create table ability_effect
(
ability_effect_id integer primary key autoincrement,
ability_id integer not null, -- FK ability
effect_id integer not null, -- FK effect
attack_id varchar(16) not null,
damage_id varchar(16) not null,
resource_type_id varchar(16) not null,
trait_id varchar(16),
constraint fk_ability_effect_ability_id foreign key (ability_id)
references ability (ability_id),
constraint fk_ability_effect_effect_id foreign key (effect_id)
references effect (effect_id)
);

Creating a table specifically for tracking change information to remove duplicated columns from tables

When creating tables, I have generally created them with a couple extra columns that track change times and the corresponding user:
CREATE TABLE dbo.Object
(
ObjectId int NOT NULL IDENTITY (1, 1),
ObjectName varchar(50) NULL ,
CreateTime datetime NOT NULL,
CreateUserId int NOT NULL,
ModifyTime datetime NULL ,
ModifyUserId int NULL
) ON [PRIMARY]
GO
I have a new project now where if I continued with this structure I would have 6 additional columns on each table with this type of change tracking. A time column, user id column and a geography column. I'm now thinking that adding 6 columns to every table I want to do this on doesn't make sense. What I'm wondering is if the following structure would make more sense:
CREATE TABLE dbo.Object
(
ObjectId int NOT NULL IDENTITY (1, 1),
ObjectName varchar(50) NULL ,
CreateChangeId int NOT NULL,
ModifyChangeId int NULL
) ON [PRIMARY]
GO
-- foreign key relationships on CreateChangeId & ModifyChangeId
CREATE TABLE dbo.Change
(
ChangeId int NOT NULL IDENTITY (1, 1),
ChangeTime datetime NOT NULL,
ChangeUserId int NOT NULL,
ChangeCoordinates geography NULL
) ON [PRIMARY]
GO
Can anyone offer some insight into this minor database design problem, such as common practices and functional designs?
Where I work, we use the same construct as yours; every table has the following fields:
CreatedBy (int, not null, FK users table - user id)
CreationDate (datetime, not null)
ChangedBy (int, null, FK users table - user id)
ChangeDate (datetime, null)
Pro: easy to track and maintain; only one I/O operation (I'll come to that later)
Con: I can't think of any at the moment (well, OK, sometimes we don't use the change fields ;-)
IMO the approach with the extra table has the problem that you somehow also have to reference the owning table for every record (unless you only need the one direction, Object to Tracking table). The approach also leads to more I/O database operations; for every insert or modify you will need to (sketched below):
add entry to Table Object
add entry to Tracking Table and get the new Id
update Object Table entry with the Tracking Table Id
It would certainly make the application code that communicates with the DB a bit more complicated and error-prone.
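To illustrate, a rough sketch of the extra round trips using the Change table from the question (@UserId, @Coordinates and @ObjectName are hypothetical parameters); because Object.CreateChangeId is NOT NULL, the Change row has to be written first, which at least saves the third step above:
DECLARE @ChangeId int;

INSERT INTO dbo.Change (ChangeTime, ChangeUserId, ChangeCoordinates)
VALUES (GETUTCDATE(), @UserId, @Coordinates);      -- extra I/O: write the tracking row first

SET @ChangeId = SCOPE_IDENTITY();                  -- fetch the generated ChangeId

INSERT INTO dbo.Object (ObjectName, CreateChangeId)
VALUES (@ObjectName, @ChangeId);                   -- then write the object row itself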

Constraints in SQL Database

I need to have a table in T-SQL which will have the following structure
KEY   Various_Columns   Flag
1     row_1             F
2     row_2             F
3     row_3             T
4     row_4             F
Either no rows, or at most one row can have the Flag column with the value T. My developer claims that this can be achieved with a check constraint placed on the table.
Questions:
Can such a constraint be placed on the database itself (i.e. an inter-row constraint) at the database level, rather than in business rules for updating or inserting rows?
Is such a table in normal form?
Or would normal form require removing the Flag column and instead (say) having another simple table or variable containing the key of the row which has Flag=T (in the above case, row 3)?
1 No. A check constraint is per row. No other constraint will do this either.
You need one of:
a trigger (all versions)
indexed view with filter Flag = T, and unique index on Flag (SQL Server 2000+)
filtered index (SQL Server 2008); see the sketch at the end of this answer
2 Good enough
3 Overkill really. You're splitting the same data up to avoid one of the solutions above. But you could use a one-row table, an FK for the ID columns, and a unique constraint on Flag.
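For the filtered-index option, a minimal sketch (SQL Server 2008+), assuming the table is called YourTable and uses the KEY/Flag columns from the question:
CREATE UNIQUE NONCLUSTERED INDEX UQ_YourTable_single_T
ON YourTable (Flag)
WHERE Flag = 'T';   -- at most one row can carry Flag = 'T'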
My developer claims that this can be achieved with a check constraint placed on the table.
SQL Server does not directly** support subqueries in CHECK constraints (a requirement for Full SQL-92; SQL Server is only compliant with Entry Level SQL-92, broadly speaking).
While there are almost certainly better ways of enforcing this constraint in SQL Server, purely out of interest it can indeed be achieved using a row-level CHECK constraint and a UNIQUE constraint e.g. here's one way:
CREATE TABLE YourStuff
(
key_col INTEGER NOT NULL UNIQUE,
Various_Columns VARCHAR(8) NOT NULL,
Flag CHAR(1) DEFAULT 'F' NOT NULL
CHECK (Flag IN ('F', 'T')),
Flag_key INTEGER UNIQUE,
CHECK (
(Flag = 'F' AND Flag_key = key_col)
OR
(Flag = 'T' AND Flag_key IS NULL) -- a SQL Server UNIQUE constraint allows only one NULL, so at most one 'T' row
)
);
The issue here is that you will need to maintain the Flag_key column's values 'manually'. Replacing the column + CHECK with a calculated column would mean the values are maintained automatically:
CREATE TABLE YourStuff
(
key_col INTEGER NOT NULL UNIQUE,
Various_Columns VARCHAR(8) NOT NULL,
Flag CHAR(1) DEFAULT 'F' NOT NULL
CHECK (Flag IN ('F', 'T')),
Flag_key AS (
CASE WHEN Flag = 'F' THEN key_col
ELSE NULL END
),
UNIQUE (Flag_key)
);
** While SQL Server does not directly support subqueries in CHECK constraints, there is a workaround in some cases using a user defined function (UDF) e.g.
CREATE FUNCTION dbo.CountTFlags ()
RETURNS INTEGER
AS
BEGIN
DECLARE @return INTEGER;
SET @return = (
SELECT COUNT(*)
FROM YourStuff
WHERE Flag = 'T'
);
RETURN @return;
END;
CREATE TABLE YourStuff
(
key_col INTEGER NOT NULL UNIQUE,
Various_Columns VARCHAR(8) NOT NULL,
Flag CHAR(1) DEFAULT 'F' NOT NULL
CHECK (Flag IN ('F', 'T')),
CHECK (1 >= dbo.CountTFlags())
);
Note that the UDF approach won't work in every case and that caution is required. The important point is that UDF will be evaluated for each row affected (rather than at the SQL statement or transaction level, as you may expect). In this case, the constraint needs to be true for every row affected and therefore -- I think! -- it is safe. For more details, see Trouble with CHECK Constraints by David Portas.
Personally, I would simply use a second table to model Flag, which would only involve keys and a foreign key e.g.
CREATE TABLE YourStuff
(
key_col INTEGER NOT NULL UNIQUE,
Various_Columns VARCHAR(8) NOT NULL
);
CREATE TABLE YourStuffFlag
(
key_col INTEGER NOT NULL UNIQUE
REFERENCES YourStuff (key_col)
);
Is [my] table in normal form?
You should be aiming for Fifth normal form (5NF). Whether you have achieved this depends upon the design of Various_Columns. I do not believe that your Flag falls foul of the requirements for 5NF, and I do not see any update, delete or insert anomalies (which is the point of normalization, but a 5NF design can still exhibit anomalies). That said, to switch the row that gets the flag attribute, my two-table design requires a single UPDATE statement while your single-table design requires two ;)
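For illustration, switching the flagged row in each design, using the column names from the tables above (@new_key is a hypothetical variable holding the key of the row that should become flagged):
-- two-table design: one statement
UPDATE YourStuffFlag SET key_col = @new_key;

-- single-table design: two statements
UPDATE YourStuff SET Flag = 'F' WHERE Flag = 'T';
UPDATE YourStuff SET Flag = 'T' WHERE key_col = @new_key;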

Constraint for only one record marked as default

How could I set a constraint on a table so that only one of the records has its isDefault bit field set to 1?
The constraint is not table scope, but one default per set of rows, specified by a FormID.
Use a unique filtered index
On SQL Server 2008 or higher you can simply use a unique filtered index
CREATE UNIQUE INDEX IX_TableName_FormID_isDefault
ON TableName(FormID)
WHERE isDefault = 1
Where the table is
CREATE TABLE TableName(
FormID INT NOT NULL,
isDefault BIT NOT NULL
)
For example if you try to insert many rows with the same FormID and isDefault set to 1 you will have this error:
Cannot insert duplicate key row in object 'dbo.TableName' with unique
index 'IX_TableName_FormID_isDefault'. The duplicate key value is (1).
Source: http://technet.microsoft.com/en-us/library/cc280372.aspx
Here's a modification of Damien_The_Unbeliever's solution that allows one default per FormID.
CREATE VIEW form_defaults
AS
SELECT FormID
FROM whatever
WHERE isDefault = 1
GO
CREATE UNIQUE CLUSTERED INDEX ix_form_defaults on form_defaults (FormID)
GO
But the serious relational folks will tell you this information should just be in another table.
CREATE TABLE form (
FormID int NOT NULL PRIMARY KEY,
DefaultWhateverID int FOREIGN KEY REFERENCES Whatever(ID)
)
From a normalization perspective, this would be an inefficient way of storing a single fact.
I would opt to hold this information at a higher level, by storing (in a different table) a foreign key to the identifier of the row which is considered to be the default.
CREATE TABLE [dbo].[Foo](
[Id] [int] NOT NULL,
CONSTRAINT [PK_Foo] PRIMARY KEY CLUSTERED
(
[Id] ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[DefaultSettings](
[DefaultFoo] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DefaultSettings] WITH CHECK ADD CONSTRAINT [FK_DefaultSettings_Foo] FOREIGN KEY([DefaultFoo])
REFERENCES [dbo].[Foo] ([Id])
GO
ALTER TABLE [dbo].[DefaultSettings] CHECK CONSTRAINT [FK_DefaultSettings_Foo]
GO
You could use an insert/update trigger.
Within the trigger after an insert or update, if the count of rows with isDefault = 1 is more than 1, then rollback the transaction.
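A minimal sketch of that idea, assuming the TableName/FormID/isDefault names from the filtered-index answer above and enforcing the rule per FormID; it is set-based, so multi-row inserts and updates are covered:
CREATE TRIGGER trg_TableName_OneDefaultPerForm
ON TableName
AFTER INSERT, UPDATE
AS
BEGIN
    -- if any FormID touched by this statement now has more than one default row, undo the change
    IF EXISTS (SELECT t.FormID
               FROM TableName t
               WHERE t.isDefault = 1
                 AND t.FormID IN (SELECT FormID FROM inserted)
               GROUP BY t.FormID
               HAVING COUNT(*) > 1)
    BEGIN
        RAISERROR('Only one default row is allowed per FormID.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END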
CREATE VIEW vOnlyOneDefault
AS
SELECT 1 as Lock
FROM <underlying table>
WHERE Default = 1
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOnlyOneDefault on vOnlyOneDefault (Lock)
GO
You'll need to have the right ANSI settings turned on for this.
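For reference, these are the SET options SQL Server generally requires for creating and using an indexed view (verify against the documentation for your version):
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET NUMERIC_ROUNDABORT OFF;
SET QUOTED_IDENTIFIER ON;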
I don't know about SQL Server, but if it supports function-based indexes like Oracle does, I hope this can be translated; if not, sorry.
You can create an index like this, supposing the default value is 1234, the column is DEFAULT_COLUMN, and ID_COLUMN is the primary key:
CREATE UNIQUE INDEX only_one_default
ON my_table ( DECODE(DEFAULT_COLUMN, 1234, -1, ID_COLUMN) );
This DDL creates a unique index that indexes -1 when DEFAULT_COLUMN is 1234, and ID_COLUMN in any other case. Then, if two rows have the default value in DEFAULT_COLUMN, it raises an exception.
The question implies to me that you have a primary table with some child records, and one of those child records will be the default record. Using addresses and a separate default table, here is an example of how to make that happen using third normal form. Of course I don't know if it's valuable to answer something that is so old, but it struck my fancy.
--drop table dev.defaultAddress;
--drop table dev.addresses;
--drop table dev.people;
CREATE TABLE [dev].[people](
[Id] [int] identity primary key,
name char(20)
)
GO
CREATE TABLE [dev].[Addresses](
id int identity primary key,
peopleId int foreign key references dev.people(id),
address varchar(100)
) ON [PRIMARY]
GO
CREATE TABLE [dev].[defaultAddress](
id int identity primary key,
peopleId int foreign key references dev.people(id),
addressesId int foreign key references dev.addresses(id))
go
create unique index defaultAddress on dev.defaultAddress (peopleId)
go
create unique index idx_addr_id_person on dev.addresses(peopleid,id);
go
ALTER TABLE dev.defaultAddress
ADD CONSTRAINT FK_Def_People_Address
FOREIGN KEY(peopleID, addressesID)
REFERENCES dev.Addresses(peopleId, id)
go
insert into dev.people (name)
select 'Bill' union
select 'John' union
select 'Harry'
insert into dev.Addresses (peopleid, address)
select 1, '123 someplace' union
select 1,'work place' union
select 2,'home address' union
select 3,'some address'
insert into dev.defaultaddress (peopleId, addressesid)
select 1,1 union
select 2,3
-- so two home addresses are default now
-- try adding another default address to Bill and you get an error
select * from dev.people
join dev.addresses on people.id = addresses.peopleid
left join dev.defaultAddress on defaultAddress.peopleid = people.id and defaultaddress.addressesid = addresses.id
insert into dev.defaultaddress (peopleId, addressesId)
select 1,2
GO
You could do it through an INSTEAD OF trigger, or if you want it as a constraint, create a constraint that references a function that checks for a row that has the default set to 1.
EDIT oops, needs to be <=
Create table mytable(id1 int, defaultX bit not null default(0))
go
create Function dbo.fx_DefaultExists()
returns int as
Begin
Declare @Ret int
Set @Ret = 0
Select @Ret = count(1) from mytable
Where defaultX = 1
Return @Ret
End
GO
Alter table mytable add
CONSTRAINT [CHK_DEFAULT_SET] CHECK
(([dbo].fx_DefaultExists()<=(1)))
GO
Insert into mytable (id1, defaultX) values (1,1)
Insert into mytable (id1, defaultX) values (2,1)
This is a fairly complex process that cannot be handled through a simple constraint.
We do this through a trigger. However before you write the trigger you need to be able to answer several things:
do we want to fail the insert if a default exists, change it to 0 instead of 1 or change the existing default to 0 and leave this one as 1?
what do we want to do if the default record is deleted and other non default records are still there? Do we make one the default, if so how do we determine which one?
You will also need to be very, very careful to make the trigger handle multiple row processing. For instance a client might decide that all of the records of a particular type should be the default. You wouldn't change a million records one at a time, so this trigger needs to be able to handle that. It also needs to handle that without looping or the use of a cursor (you really don't want the type of transaction discussed above to take hours locking up the table the whole time).
You also need a very extensive testing scenario for this trigger before it goes live. You need to test:
adding a record with no default and it is the first record for that customer
adding a record with a default and it is the first record for that customer
adding a record with no default and it is not the first record for that customer
adding a record with a default and it is not the first record for that customer
Updating a record to have the default when no other record has it (assuming you don't require one record to always be set as the default)
Updating a record to remove the default
Deleting the record with the default
Deleting a record without the default
Performing a mass insert with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record inserts
Performing a mass update with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record updates
Performing a mass delete with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record deletes
@Andy Jones gave an answer above closest to mine, but bearing in mind the Rule of Three, I placed the logic directly in the stored proc that updates this table. This was my simple solution. If I need to update the table from elsewhere, I will move the logic to a trigger. The one-default rule applies to each set of records specified by a FormID and a ConfigID:
ALTER proc [dbo].[cpForm_UpdateLinkedReport]
    @reportLinkId int,
    @defaultYN bit,
    @linkName nvarchar(150)
as
if @defaultYN = 1
begin
    declare @formId int, @configId int
    select @formId = FormID, @configId = ConfigID from csReportLink where ReportLinkID = @reportLinkId
    update csReportLink set DefaultYN = 0 where isnull(ConfigID, @configId) = @configId and FormID = @formId
end

update csReportLink
set DefaultYN = @defaultYN,
    LinkName = @linkName
where ReportLinkID = @reportLinkId