Working in MS SQL 2000, I have a table called JobOwners that maps jobs (JPSID) to the employees that own them (EmpID). It also contains the date they started owning that job (DateStarted), the date they stopped owning it (DateEnded), and whether the ownership is active (IsActive). It looks like this:
CREATE TABLE JobOwners
(
LogID int NOT NULL IDENTITY(1,1) PRIMARY KEY,
JPSID int NOT NULL FOREIGN KEY REFERENCES JobsPerShift(JPSID),
EmpID int NOT NULL FOREIGN KEY REFERENCES Employees(EmpID),
DateStarted datetime,
DateEnded datetime,
IsActive tinyint NOT NULL
)
There should be no duplicates of JPSID that are active, although inactive duplicates should be fine. With some research I found I could accomplish this using a function on a CHECK constraint.
CREATE FUNCTION CheckActiveCount(@JPSID INT)
RETURNS INT AS
BEGIN
DECLARE @result INT
SELECT @result = COUNT(*) FROM JobOwners WHERE JPSID = @JPSID AND IsActive = 1
RETURN @result
END
GO
ALTER TABLE JobOwners
ADD CONSTRAINT CK_JobOwners_IsActive
CHECK ((IsActive = 1 AND dbo.CheckActiveCount(JPSID) <= 1) OR (IsActive = 0))
This works well enough. It will allow me to insert JPSID 2 with IsActive 1, as there is no other active JPSID 2. It will let me insert JPSID 2 with IsActive 0, because the check isn't applied when IsActive is 0. It rejects when I try to insert JPSID 2 with IsActive 1 again though, because it conflicts with the constraint. See below.
INSERT INTO JobOwners
VALUES(2,2,NULL,NULL,1)
(1 row(s) affected)
INSERT INTO JobOwners
VALUES(2,2,NULL,NULL,0)
(1 row(s) affected)
INSERT INTO JobOwners
VALUES(2,3,NULL,NULL,1)
The INSERT statement conflicted with the CHECK constraint "CK_JobOwners_IsActive"...
The problem occurs if I try to update one of the inactive records to active. For some reason, it allows me.
UPDATE JobOwners SET IsActive = 1
WHERE LogID = 3
(1 row(s) affected)
If I run the same statement again, then it conflicts with the constraint, but not the first time. The front end of this app would never change an inactive record to active, it would just insert a new record, but it's still not something I'd like the table to allow.
I'm wondering if it might be best to separate the active job owners into their own table and keep a separate table for job owner history, but I'm not certain on the best practice here.
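For example, the split I have in mind would look something like this (table names are just placeholders):
CREATE TABLE ActiveJobOwners
(
JPSID int NOT NULL PRIMARY KEY REFERENCES JobsPerShift(JPSID), -- the PK itself enforces one active owner per job
EmpID int NOT NULL FOREIGN KEY REFERENCES Employees(EmpID),
DateStarted datetime
)
CREATE TABLE JobOwnerHistory
(
LogID int NOT NULL IDENTITY(1,1) PRIMARY KEY,
JPSID int NOT NULL FOREIGN KEY REFERENCES JobsPerShift(JPSID),
EmpID int NOT NULL FOREIGN KEY REFERENCES Employees(EmpID),
DateStarted datetime,
DateEnded datetime
)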
Any help would be greatly appreciated.
Thank you,
Ben
There is a known issue where certain operations can cause a CHECK constraint that calls a UDF to be bypassed. The bug was listed on Connect (before that site was scuttled and all the links were orphaned); it was acknowledged, but closed as Won't Fix. This means we need to rely on workarounds.
My first workaround would probably be an INSTEAD OF UPDATE trigger. Thanks to Martin for keeping me honest and for making me test this further - I found that I did not protect against two rows being updated to 1 in the same statement. I've corrected the logic and added a transaction to help prevent a race condition:
CREATE TRIGGER dbo.CheckJobOwners ON dbo.JobOwners
INSTEAD OF UPDATE
AS
BEGIN
SET NOCOUNT ON;
BEGIN TRANSACTION;
UPDATE j SET IsActive = 1 -- /* , other columns */
FROM dbo.JobOwners AS j INNER JOIN inserted AS i
ON i.LogID = j.LogID
WHERE i.IsActive = 1 AND NOT EXISTS
( -- since only one can be active, we don't need an expensive count:
SELECT 1 FROM dbo.JobOwners AS j2
WHERE j2.JPSID = i.JPSID
AND j2.IsActive = 1 AND j2.LogID <> i.LogID
)
AND NOT EXISTS
( -- also need to protect against two rows updated by same statement:
SELECT 1 FROM inserted AS i2
WHERE i2.JPSID = i.JPSID
AND i2.IsActive = 1 AND i2.LogID <> i.LogID
);
-- *if* you want to report errors:
IF (@@ROWCOUNT <> (SELECT COUNT(*) FROM inserted WHERE IsActive = 1))
RAISERROR('At least one row was not updated.', 11, 1);
-- assume setting active = 0 always ok & that IsActive is not nullable
UPDATE j SET IsActive = 0 -- /* , other columns */
FROM dbo.JobOwners AS j INNER JOIN inserted AS i
ON j.LogID = i.LogID
WHERE i.IsActive = 0;
COMMIT TRANSACTION;
END
GO
(My only reason for using an INSTEAD OF trigger rather than an AFTER trigger is that you only update the rows you need to update, instead of having to roll back after the fact, which wouldn't let you roll back only the invalid updates in the case of a multi-row update.)
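A quick way to exercise the logic (my own test sequence, assuming LogID 1 is the active row for JPSID 2 and LogIDs 2 and 3 are inactive rows for the same job):
UPDATE dbo.JobOwners SET IsActive = 1 WHERE LogID = 3;       -- not applied: JPSID 2 already has an active row
UPDATE dbo.JobOwners SET IsActive = 0 WHERE LogID = 1;       -- applied
UPDATE dbo.JobOwners SET IsActive = 1 WHERE LogID IN (2, 3); -- neither applied: two rows for the same JPSID in one statement
UPDATE dbo.JobOwners SET IsActive = 1 WHERE LogID = 3;       -- applied, now that JPSID 2 has no active row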
There is a lot of good discussion about this issue here:
https://web.archive.org/web/20171013131650/http://sqlblog.com/blogs/tibor_karaszi/archive/2009/12/17/be-careful-with-constraints-calling-udfs.aspx
EDIT: HUGE caveat. See Aaron's comment on this SO question for reasons you probably want to avoid combining UDFs and CHECK constraints. However, since (even after reading and understanding Aaron's concerns) my answer is still viable in our system, because of 1) how our system works and 2) the fact that we actually want UPDATE statements to fail in the scenarios he describes, I am leaving my answer here. As always, it is up to you to make sure you understand the ramifications of using the script in this answer. YOU HAVE BEEN WARNED
I followed the link in Aaron's (accepted) answer. In the description there was a specific piece of text that caught my attention: "(to check values that are not passing as parameters)".
That gave me an idea. I have a table with columns CustomerId, ContactId, ContactType, all of type int. The PK is CustomerId and ContactId. I needed to limit each CustomerId to only one "Primary" contact (ContactType = 1) but allow as many "secondary" and "other" contacts as people wanted to add. I had set up my UDF to accept only CustomerId as a parameter, so I added ContactType as well, but since I only cared about ContactType = 1, I just hard-coded the ContactType parameter to 1 inside the function. It works on SQL Server 2012, but I have no idea about other versions.
Here is a test script. I "squished" together some of the statements to reduce the amount of scrolling needed. Note: the constraint ALLOWS zero Primary Contacts because it would be impossible to set a different Contact as the Primary if you did not first remove an existing Primary.
CREATE TABLE [dbo].[CheckConstraintTest](
[CustomerId] [int] NOT NULL,
[ContactId] [int] NOT NULL,
[ContactType] [int] NULL,
CONSTRAINT [PK_CheckConstraintTest] PRIMARY KEY CLUSTERED (
[CustomerId] ASC,
[ContactId] ASC
))
GO
CREATE FUNCTION dbo.OnlyOnePrimaryContact (
@CustId int, @ContactType int ) RETURNS bit
AS BEGIN
DECLARE @result bit, @count int
SET @ContactType = 1 --only care about "1" but needed parm to force SQL to "care" about that column
SELECT @count = COUNT(*) FROM CheckConstraintTest WHERE [CustomerId] = @CustId AND [ContactType] = @ContactType
IF @count < 2 SET @result = 1
ELSE SET @result = 0
RETURN @result
END
GO
ALTER TABLE [dbo].[CheckConstraintTest] WITH CHECK ADD CONSTRAINT [SinglePrimaryContact] CHECK (([dbo].[OnlyOnePrimaryContact]([CustomerId],[ContactType])=(1)))
GO
ALTER TABLE [dbo].[CheckConstraintTest] CHECK CONSTRAINT [SinglePrimaryContact]
GO
INSERT INTO [CheckConstraintTest] (CustomerId, ContactId, ContactType)
VALUES (1,1,1), (1,2,2), (1,3,2), (1,4,2), (2,1,1)
INSERT INTO [CheckConstraintTest] (CustomerId, ContactId, ContactType)
VALUES (1,5,1) --This should fail
UPDATE [CheckConstraintTest] --This should fail
SET ContactType = 1
WHERE CustomerId = 1 AND ContactId = 2
UPDATE [CheckConstraintTest] --This should work
SET ContactType = 2
WHERE CustomerId = 1 AND ContactId = 1
INSERT INTO [CheckConstraintTest] (CustomerId, ContactId, ContactType)
VALUES (1,5,1) --This should work now since we changed Cust 1, Contact 1, to "secondary" in the previous statement
Related
An example of the problem:
There are 3 columns present in my SQL database.
+-------------+------------------+-------------------+
| id(integer) | age(varchar(20)) | name(varchar(20)) |
+-------------+------------------+-------------------+
There are 100 rows with different ids, ages and names. However, since many people update the database, age and name constantly change.
There are some boundaries on age and name:
Age has to be an integer and has to be greater than 0.
Name has to be alphabets and not numbers.
The problem is to write a script that checks whether changed values are within those boundaries. For example, age = -1 or Name = 1 are out of bounds.
Right now, there is a script that does insert * into newtable where age < 0 and isnumeric(age) = 0 or isnumeric(name) = 0;
The resulting new table holds the rows whose values are out of bounds.
I was wondering if there is a more efficient way to do such checking in SQL. I'm using Microsoft SQL Server; would it be more efficient to use another language such as C# or Python to solve this?
You can apply CHECK constraints. Replace 'myTable' with your table name; 'AgeCheck' and 'NameCheck' are the names of the constraints, and AGE is the name of your age column.
ALTER TABLE myTable
ADD CONSTRAINT AgeCheck CHECK(AGE > 0 )
ALTER TABLE myTable
ADD CONSTRAINT NameCheck CHECK ([Name] NOT LIKE '%[^A-Z]%')
See more on Create Check Constraints
If you want to automatically insert the invalid data into a new table, you can create an AFTER INSERT trigger. I have given a snippet for your reference; you can expand it with additional logic for the name check.
Generally, triggers are discouraged, as they lengthen the transaction. If you want to avoid the trigger, you can have a SQL Agent job do the auditing on a regular basis (a sketch of such an audit query follows the trigger below).
CREATE TRIGGER AfterINSERTTrigger on [Employee]
FOR INSERT
AS
BEGIN
-- note: this pattern only handles single-row inserts
DECLARE @Age TINYINT, @Id INT, @Name VARCHAR(20);
SELECT @Id = ins.Id FROM INSERTED ins;
SELECT @Age = ins.Age FROM INSERTED ins;
SELECT @Name = ins.Name FROM INSERTED ins;
IF (@Age = 0)
BEGIN
INSERT INTO [EmployeeAudit](
[ID]
,[Name]
,[Age])
VALUES (@Id,
@Name,
@Age);
END
END
GO
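For the SQL Agent job alternative mentioned above, the job step could run something like this (my sketch, reusing the Employee/EmployeeAudit names from the trigger; TRY_CONVERT needs SQL Server 2012+):
INSERT INTO EmployeeAudit (ID, Name, Age)
SELECT Id, Name, Age
FROM Employee
WHERE TRY_CONVERT(int, Age) IS NULL   -- age is not an integer at all
   OR TRY_CONVERT(int, Age) <= 0      -- or not greater than 0
   OR Name LIKE '%[^A-Za-z]%';        -- or name contains something other than letters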
I use SQL Server 2016. I have a database table called "Member".
In that table, I have these 3 columns (for the purpose of my question):
idMember [INT - Identity - Primary Key]
memEmail
memEmailPartner
I want to prevent a row to use an email that already exists in the table.
Both email columns are not mandatory, so they can be left blank (NULL).
If I create a new Member:
If not blank, the values entered for "memEmail" and "memEmailPartner" (independently) should not be found in any other row's memEmail or memEmailPartner columns.
So if I want to create a row with email (dominic@email.com) I must not find any occurrence of that value in memEmail or memEmailPartner.
If I update an existing Member:
I must not find any occurrence of that value in memEmail or memEmailPartner, with the exception of the row (idMember) I am updating, which may already have the value in memEmail or memEmailPartner.
--
From what I read on Google, it should be possible to do something with a Function-Based Check Constraint but I can't make that work.
Anyone have a solution to my problem?
Thank you.
I may have misunderstood exactly what you were asking but it looks like you want a simple upsert query with IF EXISTS conditions.
DECLARE @emailAddress VARCHAR(255)= 'dominic@email.com', --dummy value
@id INT= 2; --dummy value
IF NOT EXISTS
(
SELECT 1
FROM #Member
WHERE memEmail = @emailAddress
OR memEmailPartner = @emailAddress
)
BEGIN
SELECT 'insert';
END;
ELSE IF EXISTS
(
SELECT 1
FROM #Member
WHERE idMember = @id
)
BEGIN
SELECT 'update';
END;
A trigger is the traditional way of doing what you're asking for. Here's a simple demo:
--if object_id('member') is not null drop table member
go
create table member (
idMember INT Identity Primary Key,
memEmail varchar(100),
memEmailPartner varchar(100)
)
go
create trigger trg_member on member after insert, update as
begin
set nocount on
if exists (select 1 from member m join inserted i on i.memEmail = m.memEmail and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmail = m.memEmailPartner and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmail and i.idMember <> m.idMember) or
exists (select 1 from member m join inserted i on i.memEmailPartner = m.memEmailPartner and i.idMember <> m.idMember)
begin
raiserror('Email addresses must be unique.', 16, 1)
rollback
end
end
go
insert member(memEmail, memEmailPartner) values('a@a.com', null), ('b@b.com', null), (null, 'c@c.com'), (null, 'd@d.com')
go
select * from member
insert member(memEmail, memEmailPartner) values('a@a.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'a@a.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('c@c.com', null) -- should fail
go
insert member(memEmail, memEmailPartner) values(null, 'c@c.com') -- should fail
go
insert member(memEmail, memEmailPartner) values('e@e.com', null) -- should work
go
insert member(memEmail, memEmailPartner) values(null, 'f@f.com') -- should work
go
select * from member
-- Make sure updates still work!
update member set memEmail = memEmail, memEmailPartner = memEmailPartner
I've not tested this extensively but it should be enough to get you started if you want to try this approach.
StuartLC notes the potential for the UDF check constraint to fail in set-based updates and/or various other conditions; triggers don't have this problem.
Stuart also suggests reconsidering whether this should really be a database constraint or managed through business logic elsewhere. I'm inclined to agree - my gut feel here is that sooner or later you will come across a situation that requires email addresses to be reused, or in some other way not strictly unique.
TL;DR
The wisdom of applying this kind of business rule logic in the database needs to be reconsidered - this check is likely a better candidate for your application, or for a stored procedure which acts as an insert gatekeeper instead of allowing direct inserts into the table.
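As an illustration of the gatekeeper idea (a sketch only; the procedure name, error handling and column sizes are my assumptions):
CREATE PROCEDURE dbo.AddMember
    @memEmail        varchar(100) = NULL,
    @memEmailPartner varchar(100) = NULL
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    -- serialize concurrent callers while we check for duplicates
    IF EXISTS (SELECT 1 FROM dbo.Member WITH (UPDLOCK, HOLDLOCK)
               WHERE memEmail IN (@memEmail, @memEmailPartner)
                  OR memEmailPartner IN (@memEmail, @memEmailPartner))
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('Email address already in use.', 16, 1);
        RETURN;
    END
    INSERT dbo.Member (memEmail, memEmailPartner)
    VALUES (@memEmail, @memEmailPartner);
    COMMIT TRANSACTION;
END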
Ignoring the Warnings
That said, I do believe that what you want is possible in a constraint UDF, albeit with potentially atrocious performance consequences*1, and likely prone to race conditions in set-based updates.
Here's a user defined function which applies the unique email logic across both columns. Note that by the time the constraint is checked, the row is IN the table already, hence the new row itself needs to be excluded from the duplicate checks.
My code is also dependent on ANSI NULL behaviour, i.e. that the predicates NULL = NULL and X IN (NULL) both return NULL, and hence are excluded from the failure check (in order to meet your requirement that NULLs do not fail the rule).
We also need to check for an insert where BOTH new columns are non-null but duplicate each other.
So here's the UDF doing the checking:
CREATE FUNCTION dbo.CheckUniqueEmails(@id int, @memEmail varchar(50),
@memEmailPartner varchar(50))
RETURNS bit
AS
BEGIN
DECLARE @retval bit;
IF @memEmail = @memEmailPartner
OR EXISTS (SELECT 1 FROM MyTable WHERE memEmail IS NOT NULL
AND memEmail IN(@memEmail, @memEmailPartner) AND idMember <> @id)
OR EXISTS (SELECT 1 FROM MyTable WHERE memEmailPartner IS NOT NULL
AND memEmailPartner IN(@memEmail, @memEmailPartner) AND idMember <> @id)
SET @retval = 0
ELSE
SET @retval = 1;
RETURN @retval;
END;
GO
Which is then enforced in a CHECK constraint:
ALTER TABLE MyTable ADD CHECK (dbo.CheckUniqueEmails(
idMember, memEmail, memEmailPartner) = 1);
I've put a SQLFiddle up here
Uncomment the 'failed' test cases to ensure that the above check constraint is working.
I haven't tested this with updates, and as per Martin's advice on the link, this will likely break on an insert with multiple rows.
*1 - we'll need indexes on BOTH email address columns.
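For example (the index names are mine):
CREATE INDEX IX_MyTable_memEmail ON MyTable (memEmail);
CREATE INDEX IX_MyTable_memEmailPartner ON MyTable (memEmailPartner);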
I have a table called Employee. The EmpId column serves as the primary key. In my scenario, I cannot make it AutoNumber.
What would be the best way of generating the the next EmpId for the new row that I want to insert in the table?
I am using SQL Server 2008 with C#.
Here is the code that I currently use, but only to generate IDs for key-value pair tables or link tables (m:n relations):
Create PROCEDURE [dbo].[mSP_GetNEXTID]
@NEXTID int out,
@TABLENAME varchar(100),
@UPDATE CHAR(1) = NULL
AS
BEGIN
DECLARE @QUERY VARCHAR(500)
BEGIN
IF EXISTS (SELECT LASTID FROM LASTIDS WHERE TABLENAME = @TABLENAME and active=1)
BEGIN
SELECT @NEXTID = LASTID FROM LASTIDS WHERE TABLENAME = @TABLENAME and active=1
IF(@UPDATE IS NULL OR @UPDATE = '')
BEGIN
UPDATE LASTIDS
SET LASTID = LASTID + 1
WHERE TABLENAME = @TABLENAME
and active=1
END
END
ELSE
BEGIN
SET @NEXTID = 1
INSERT INTO LASTIDS(LASTID,TABLENAME, ACTIVE)
VALUES(@NEXTID+1,@TABLENAME, 1)
END
END
END
Using MAX(id) + 1 is a bad idea both performance- and concurrency-wise.
Instead you should use sequences, which were designed specifically for this kind of problem.
CREATE SEQUENCE EmpIdSeq AS bigint
START WITH 1
INCREMENT BY 1;
And to generate the next id use:
SELECT NEXT VALUE FOR EmpIdSeq;
You can use the generated value in an insert statement:
INSERT Emp (EmpId, X, Y)
VALUES (NEXT VALUE FOR EmpIdSeq, 'x', 'y');
And even use it as default for your column:
CREATE TABLE Emp
(
EmpId bigint PRIMARY KEY CLUSTERED
DEFAULT (NEXT VALUE FOR EmpIdSeq),
X nvarchar(255) NULL,
Y nvarchar(255) NULL
);
Update: The above solution is only applicable to SQL Server 2012+. For older versions you can simulate the sequence behavior using dummy tables with identity fields:
CREATE TABLE EmpIdSeq (
SeqID bigint IDENTITY PRIMARY KEY CLUSTERED
);
And a procedure that emulates NEXT VALUE:
CREATE PROCEDURE GetNewSeqVal_Emp
@NewSeqVal bigint OUTPUT
AS
BEGIN
SET NOCOUNT ON
INSERT EmpIdSeq DEFAULT VALUES
SET @NewSeqVal = scope_identity()
DELETE FROM EmpIdSeq WITH (READPAST)
END;
Usage example:
DECLARE @NewSeqVal bigint
EXEC GetNewSeqVal_Emp @NewSeqVal OUTPUT
The performance overhead of deleting the last inserted element will be minimal; still, as pointed out by the original author, you can optionally remove the delete statement and schedule a maintenance job to delete the table contents off-hours (trading space for performance).
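If you go that route, the scheduled cleanup could be as simple as this (my sketch, not from the original article):
-- run from an off-hours SQL Agent job; use DELETE rather than TRUNCATE,
-- since TRUNCATE TABLE would reseed the identity and restart the sequence
DELETE FROM EmpIdSeq;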
Adapted from SQL Server Customer Advisory Team Blog.
Working SQL Fiddle
The above
select max(empid) + 1 from employee
is the way to get the next number, but if there are multiple users inserting into the database, context switching might cause two users to get the same value for empid, each add 1 to it, and end up with duplicate ids. If you do have multiple users, you may have to lock the table while inserting. This is not best practice, which is why auto-increment exists for database tables.
I hope this works for you. Considering that your ID field is an integer:
INSERT INTO Table WITH (TABLOCK)
SELECT CASE WHEN MAX(ID) IS NULL
THEN 1 ELSE MAX(ID)+1 END, VALUE_1, VALUE_2....
FROM Table
Try the following query (using INSERT ... SELECT, since SQL Server does not allow a subquery inside a VALUES clause):
INSERT INTO Table
SELECT isnull(MAX(ID),0)+1, VALUE_1, VALUE_2.... FROM Table
You have to wrap MAX in isnull; otherwise it will return NULL in the final result when the table contains no rows.
I am trying to create a simple insert trigger that gets the count from a table and adds it to another, like this:
CREATE TABLE [poll-count](
id VARCHAR(100),
altid BIGINT,
option_order BIGINT,
uip VARCHAR(50),
[uid] VARCHAR(100),
[order] BIGINT,
PRIMARY KEY NONCLUSTERED([order]),
FOREIGN KEY ([order]) REFERENCES ord ([order])
)
GO
CREATE TRIGGER [get-poll-count]
ON [poll-count]
FOR INSERT
AS
BEGIN
DECLARE @count INT
SET @count = (SELECT COUNT (*) FROM [poll-count] WHERE option_order = i.option_order)
UPDATE [poll-options] SET [total] = @count WHERE [order] = i.option_order
END
GO
Whenever I try to run this I get this error:
The multi-part identifier "i.option_order" could not be bound
What is the problem?
Thanks
Your trigger currently assumes that there will always be one-row inserts. Have you tried your trigger with anything like this?
INSERT dbo.[poll-options](option_order --, ...)
VALUES(1 --, ...),
(2 --, ...);
Also, you say that SQL Server "cannot access inserted table" - yet your statement says this. Where do you reference inserted (even if this were a valid subquery structure)?
SET #count = (SELECT COUNT (*) FROM [poll-count]
WHERE option_order = i.option_order)
-----------------------^ "i" <> "inserted"
Here is a trigger that properly references inserted and also properly handles multi-row inserts:
CREATE TRIGGER dbo.pollupdate
ON dbo.[poll-options]
FOR INSERT
AS
BEGIN
SET NOCOUNT ON;
;WITH x AS
(
SELECT option_order, c = COUNT(*)
FROM dbo.[poll-options] AS p
WHERE EXISTS
(
SELECT 1 FROM inserted
WHERE option_order = p.option_order
)
GROUP BY option_order
)
UPDATE p SET total = x.c
FROM dbo.[poll-options] AS p
INNER JOIN x
ON p.option_order = x.option_order;
END
GO
However, why do you want to store this data on every row? You can always derive the count at runtime, know that it is perfectly up to date, and avoid the need for a trigger altogether. If it's about the performance aspect of deriving the count at runtime, a much easier way to implement this write-ahead optimization for about the same maintenance cost during DML is to create an indexed view:
CREATE VIEW dbo.[poll-options-count]
WITH SCHEMABINDING
AS
SELECT option_order, c = COUNT_BIG(*)
FROM dbo.[poll-options]
GROUP BY option_order;
GO
CREATE UNIQUE CLUSTERED INDEX oo ON dbo.[poll-options-count](option_order);
GO
Now the index is maintained for you and you can derive very quick counts for any given (or all) option_order values. You'll have to test, of course, whether the improvement in query time is worth the increased maintenance (though you are already paying that price with the trigger, except that it can affect many more rows in any given insert, so...).
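For example, a lookup against the indexed view could look like this (on non-Enterprise editions the NOEXPAND hint is needed for the view's index to be used directly):
SELECT option_order, c
FROM dbo.[poll-options-count] WITH (NOEXPAND)
WHERE option_order = 1;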
As a final suggestion, don't use special characters like - in object names. It just forces you to always wrap it in [square brackets] and that's no fun for anyone.
I have a situation where a table has three columns: ID, Value and Status. For a distinct ID there should be only one status with value 1, while an ID should be allowed to have more than one status with value 0. A unique key would prevent an ID from having more than one status row (whether 0 or 1).
Is there a way to solve this, maybe using constraints?
Thanks
You can create an indexed view that will uphold your constraint of keeping ID unique for [Status] = 1.
create view dbo.v_YourTable with schemabinding as
select ID
from dbo.YourTable
where [Status] = 1
go
create unique clustered index UX_v_UniTest_ID on v_YourTable(ID)
In SQL Server 2008 you could use a unique filtered index instead.
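For example (assuming the same dbo.YourTable as above):
create unique nonclustered index UX_YourTable_OneActive
on dbo.YourTable(ID)
where [Status] = 1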
If the table can have duplicate ID values, then a check constraint wouldn't work for your situation. I think the only way would be to use a trigger. If you are looking for an example then I can post one. But in summary, use a trigger to test if the inserted/updated ID has a status of 1 that is duplicated across the same ID.
EDIT: You could always use a unique constraint on ID and Value. I'm thinking that will give you what you are looking for.
You could put this into an insert/ update trigger to check to make sure only one combination exists with the 1 value; if your condition is not met, you could throw a trappable error and force the operation to roll back.
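A rough sketch of that trigger (my illustration, assuming the table is called YourTable with columns ID, Value and Status):
CREATE TRIGGER dbo.trg_YourTable_OneActiveStatus
ON dbo.YourTable
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- for any ID touched by this statement, there must be at most one row with Status = 1
IF EXISTS
(
SELECT 1 FROM dbo.YourTable AS t
WHERE t.[Status] = 1
AND t.ID IN (SELECT ID FROM inserted)
GROUP BY t.ID
HAVING COUNT(*) > 1
)
BEGIN
RAISERROR('Only one row with Status = 1 is allowed per ID.', 16, 1);
ROLLBACK TRANSACTION;
END
END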
If you can use NULL instead of 0 for a zero-status, then you can use a UNIQUE constraint on the (ID, status) pair and it should work on engines that follow standard NULL-comparison behaviour (NULL != NULL), where rows with multiple NULLs do not conflict. Note that SQL Server's UNIQUE constraints treat NULLs as equal, so there you would need the filtered-index or indexed-view approach above instead.
IMHO, this is basically a normalisation problem. The column named "id" does not uniquely address a row, so it can never be a PK; at the very least a new (surrogate) key element is needed. The constraint itself cannot be expressed as an expression "within the row", so it has to be expressed in terms of a FK.
So it breaks down into two tables:
One with PK=id, and a FK REFERENCING two.sid
Two with PK= surrogate key, and FK id REFERENCING one.id
The original payload "value" also lives here.
The "one bit variable" disappears, because it can be expressed in terms of EXISTS. (effectively table one points to the row that holds the token)
[I expect the Postgres rule system could be used to use the above two-tables-model to emulate the intended behaviour of the OP. But that would be an ugly hack...]
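A bare-bones sketch of that two-table layout (the column names are mine; one of the FKs has to be added after both tables exist because the references are circular):
CREATE TABLE one
( id INTEGER NOT NULL PRIMARY KEY
, active_sid INTEGER NULL -- points at the row in "two" that currently holds the token
);
CREATE TABLE two
( sid INTEGER NOT NULL PRIMARY KEY -- surrogate key
, id INTEGER NOT NULL REFERENCES one (id)
, value VARCHAR(50) -- the original payload
);
ALTER TABLE one
ADD FOREIGN KEY (active_sid) REFERENCES two (sid);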
EDIT/UPDATE:
Postgres supports partial/conditional indexes (I don't know about MS SQL).
DROP TABLE tmp.one;
CREATE TABLE tmp.one
( sid INTEGER NOT NULL PRIMARY KEY -- surrogate key
, id INTEGER NOT NULL
, status INTEGER NOT NULL DEFAULT '0'
/* ... payload */
);
INSERT INTO tmp.one(sid,id,status) VALUES
(1,1,0) , (2,1,1) , (3,1,0)
, (4,2,0) , (5,2,0) , (6,2,1)
, (7,3,0) , (8,3,0) , (9,3,1)
;
CREATE UNIQUE INDEX only_one_non_zero ON tmp.one (id)
WHERE status > 0 -- "partial index"
;
\echo this should succeed
BEGIN ;
UPDATE tmp.one SET status = 0 WHERE sid=2;
UPDATE tmp.one SET status = 1 WHERE sid=1;
COMMIT;
\echo this should fail
BEGIN ;
UPDATE tmp.one SET status = 1 WHERE sid=4;
UPDATE tmp.one SET status = 0 WHERE sid=9;
COMMIT;
SELECT * FROM tmp.one ORDER BY sid;
I came up with a solution.
First, create a function:
CREATE FUNCTION [dbo].[Check_Status] (@ID int)
RETURNS INT
AS
BEGIN
DECLARE @r INT;
SET @r =
(SELECT SUM(status) FROM dbo.[table] WHERE ID = @ID);
RETURN @r;
END
Second, create a CHECK constraint on the table with this expression:
([dbo].[Check_Status]([ID])<(2))
This way, one ID can have a single row with status 1 and as many rows with status 0 as needed.
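Spelled out as a full statement (the constraint name is mine; replace [table] with your table name):
ALTER TABLE dbo.[table] WITH CHECK
ADD CONSTRAINT CK_OneActiveStatus
CHECK ([dbo].[Check_Status]([ID]) < (2))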
create function dbo.IsValueUnique
(
@proposedValue varchar(50)
,@currentId int
)
RETURNS bit
AS
/*
--EXAMPLE
print dbo.IsValueUnique() -- fail
print dbo.IsValueUnique(null) -- fail
print dbo.IsValueUnique(null,1) -- pass
print dbo.IsValueUnique('Friendly',1) -- pass
*/
BEGIN
DECLARE @count bit
set @count =
(
select count(1)
from dbo.MyTable
where @proposedValue is not null
and dbo.MyTable.MyPkColumn != @currentId
and dbo.MyTable.MyColumn = @proposedValue
)
RETURN case when @count = 0 then 1 else 0 end
END
GO
ALTER TABLE MyTable
WITH CHECK
add constraint CK_ColumnValueIsNullOrUnique
CHECK ( 1 = dbo.IsValueUnique([MyColumn],[MyPkColumn]) )
GO