What options are available for applying a set-level constraint in PostgreSQL?

I have a situation where I need to ensure that there is only one active record with the same object_id and user_id at any time. Here is a representative table:
CREATE TABLE actions (
    id SERIAL PRIMARY KEY,
    object_id integer,
    user_id integer,
    active boolean default true,
    created_at timestamptz default now()
);
By only one active record at a time, I mean you could have a sequence of inserts like the following:
insert into actions (object_id, user_id, active) values (1, 1, true);
insert into actions (object_id, user_id, active) values (1, 1, false);
but doing a subsequent
insert into actions (object_id, user_id, active) values (1, 1, true);
should fail because at this point in time, there already exists 1 active tuple with object_id = 1 and user_id = 1.
I'm using PostgreSQL 8.4.
I saw this post, which looks interesting, but it's Oracle-specific.
I also saw this post, but it requires more care regarding the transaction isolation level. I don't think it would work as-is in read committed mode.
My question is: what other options are available to enforce this kind of constraint?
Edit: Removed the third insert in the first set; I think it was confusing the example. I also added the created_at timestamp to help with the context. To reiterate, there can be multiple (object_id, user_id, false) tuples, but only one (object_id, user_id, true) tuple.
Update: I accepted Craig's answer, but for others who may stumble upon something similar, here is another possible (though suboptimal) solution.
CREATE TABLE action_consistency (
    object_id integer,
    user_id integer,
    count integer default 0,
    primary key (object_id, user_id),
    check (count >= 0 AND count <= 1)
);
CREATE OR REPLACE FUNCTION keep_action_consistency()
RETURNS TRIGGER AS
$BODY$
BEGIN
    IF NEW.active THEN
        UPDATE action_consistency
           SET count = count + 1
         WHERE object_id = NEW.object_id AND
               user_id = NEW.user_id;

        INSERT INTO action_consistency (object_id, user_id, count)
        SELECT NEW.object_id, NEW.user_id, 1
         WHERE NOT EXISTS (SELECT 1
                             FROM action_consistency
                            WHERE object_id = NEW.object_id AND
                                  user_id = NEW.user_id);
    ELSE
        -- assuming insert will be active for simplicity
        UPDATE action_consistency
           SET count = count - 1
         WHERE object_id = NEW.object_id AND
               user_id = NEW.user_id;
    END IF;

    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER ensure_action_consistency
AFTER INSERT OR UPDATE ON actions
FOR EACH ROW EXECUTE PROCEDURE keep_action_consistency();
It requires the use of a tracking table. For what I hope are obvious reasons, this is not at all desirable. It means that you have an additional row for each distinct (object_id, user_id) in actions.
Another reason why I accepted @Craig Ringer's answer is that there are foreign key references to actions.id in other tables that are also rendered inactive when a given action tuple changes state. This is why the history table is less ideal in this scenario. Thank you for the comments and answers.

Given your specification that you want to limit only one entry to being active at a time, try:
CREATE TABLE actions (
    id SERIAL PRIMARY KEY,
    object_id integer,
    user_id integer,
    active boolean default true,
    created_at timestamptz default now()
);

CREATE UNIQUE INDEX actions_unique_active_y
    ON actions (object_id, user_id)
    WHERE (active = 't');
This is a partial unique index, a PostgreSQL-specific feature (see partial indexes). It constrains the set so that only one (object_id, user_id) tuple may exist where active is true.
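For illustration, here is a rough sketch of how the partial index behaves with the sample inserts from the question (the failure is a unique-violation error; the exact message varies by version):
-- succeeds: first active row for (1, 1)
insert into actions (object_id, user_id, active) values (1, 1, true);
-- succeeds: inactive rows are not covered by the partial index
insert into actions (object_id, user_id, active) values (1, 1, false);
-- fails: an active (1, 1) row already exists
insert into actions (object_id, user_id, active) values (1, 1, true);
-- deactivating the current row first makes a new active insert legal again
update actions set active = false where object_id = 1 and user_id = 1 and active;
insert into actions (object_id, user_id, active) values (1, 1, true);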
While that strictly answers your question as you explained further in comments, I think wildplasser's answer describes the more correct choice and best approach.

You can use a UNIQUE constraint to ensure that a set of columns only contains unique values.
Here, the pair of object_id and user_id has been made unique:
CREATE TABLE actions (
    id SERIAL PRIMARY KEY,
    object_id integer,
    user_id integer,
    active boolean default true,
    UNIQUE (object_id, user_id)
);
Similarly, if you want to make the set of object_id, user_id, and active UNIQUE, simply add the column to the UNIQUE list.
CREATE TABLE actions (
    id SERIAL PRIMARY KEY,
    object_id integer,
    user_id integer,
    active boolean default true,
    UNIQUE (object_id, user_id, active)
);

Original:
CREATE TABLE actions (
    id SERIAL PRIMARY KEY,
    object_id integer,
    user_id integer,
    active boolean default true
);
my version:
CREATE TABLE actions (
    object_id integer NOT NULL REFERENCES objects (id),
    user_id integer NOT NULL REFERENCES users (id),
    PRIMARY KEY (user_id, object_id)
);
What are the differences:
omitted the surrogate key. It is useless: it enforces no constraint, and nobody will ever reference it
added a (composite) primary key, which happens to be the logical key
changed the two fields to NOT NULL and made them into foreign keys (what would be the meaning of a row that does not exist in the users or objects table?)
removed the boolean flag. What is the semantic difference between a {user_id, object_id} tuple that does not exist and one that does exist but has its "active" flag set to false? Why create three states when you only need two? (A usage sketch follows below.)
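Under this design, "active" simply means the row exists, so there is nothing extra to constrain. A minimal usage sketch, assuming the users and objects tables already contain rows with id = 1:
-- activate: the primary key guarantees at most one row per (user, object)
INSERT INTO actions (user_id, object_id) VALUES (1, 1);
-- a second activation attempt fails with a primary key violation
INSERT INTO actions (user_id, object_id) VALUES (1, 1);
-- deactivate: just remove the row
DELETE FROM actions WHERE user_id = 1 AND object_id = 1;
-- check whether the pair is currently active
SELECT EXISTS (SELECT 1 FROM actions WHERE user_id = 1 AND object_id = 1) AS is_active;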

Related

Check constraint to prevent 2 or more rows from having numeric value of 1

I have a SQL table with a column called [applied]; only one row out of all rows can be applied (have the value 1), and all other rows should have the value 0.
Is there a check constraint that I can write to enforce such a case?
If you use null instead of 0, it will be much easier.
Have a CHECK constraint to make sure the (non-null) value is 1, and a UNIQUE constraint to allow only a single row with the value 1.
create table testtable (
    id int primary key,
    applied int,
    constraint applied_unique unique (applied),
    constraint applied_eq_1 check (applied = 1)
);
Core ANSI SQL, i.e. expected to work with any database.
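As a quick sketch of the intended usage (exact error messages differ by database; note also that a few databases, such as SQL Server, allow only one NULL per unique column, so the "any number of NULLs" behaviour is not universal):
insert into testtable (id, applied) values (1, 1);    -- the single applied row
insert into testtable (id, applied) values (2, null); -- any number of NULLs is allowed
insert into testtable (id, applied) values (3, null);
insert into testtable (id, applied) values (4, 1);    -- fails: unique constraint on applied
insert into testtable (id, applied) values (5, 0);    -- fails: check constraint applied = 1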
Most databases support filtered indexes:
create unique index unq_t_applied on t(applied) where applied = 1;
To say exactly how to write a trigger that will help you, we would need to know which database you are using.
You will need a trigger where this will be your test condition:
SELECT COUNT(APPLIED)
FROM TEST
WHERE APPLIED = 1
If the count is greater than 0, do not allow the insert; otherwise allow it.
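For example, a minimal sketch of such a trigger, assuming PostgreSQL and the TEST/APPLIED names above (other databases use different trigger syntax, and without locking two concurrent transactions could still slip past this check, which is why the index-based answers are generally preferable):
CREATE OR REPLACE FUNCTION check_single_applied()
RETURNS TRIGGER AS
$$
BEGIN
    -- reject the new row if an applied row already exists
    IF NEW.applied = 1 AND EXISTS (SELECT 1 FROM test WHERE applied = 1) THEN
        RAISE EXCEPTION 'only one row may have applied = 1';
    END IF;
    RETURN NEW;
END;
$$
LANGUAGE plpgsql;

CREATE TRIGGER ensure_single_applied
BEFORE INSERT ON test
FOR EACH ROW EXECUTE PROCEDURE check_single_applied();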
While this can be done with triggers and constraints, they probably require an index. Instead, consider a join table.
create table things_applied (
    id smallint primary key default 1,
    thing_id bigint references things(id) not null,
    check (id = 1)
);
Because the primary key is unique and constrained to the single value 1, there can only ever be one row.
The first record is activated with an insert.
insert into things_applied (thing_id) values (1);
Change it by updating the row.
update things_applied set thing_id = 2;
To deactivate completely, delete the row.
delete from things_applied;
To find the active row, join with the table.
select t.*
from things t
join things_applied ta on ta.thing_id = t.id
To check if it's active at all, count the rows.
select count(id) as active
from things_applied

How to insert data from one table into another as PostgreSQL array?

I have the following tables:
CREATE TABLE "User" (
id integer DEFAULT nextval('"User_id_seq"'::regclass) PRIMARY KEY,
name text NOT NULL DEFAULT ''::text,
coinflips boolean[]
);
CREATE TABLE "User_coinflips_COPY" (
"nodeId" integer,
position integer,
value boolean,
id integer DEFAULT nextval('"User_coinflips_COPY_id_seq"'::regclass) PRIMARY KEY
);
I'm now looking for the SQL statement that grabs the value entry from each row in User_coinflips and inserts it as an array into the coinflips column on User.
Any help would be appreciated!
Update
Not sure if it's important, but I just realized a minor mistake in my table definitions above: I replaced User_coinflips with User_coinflips_COPY since that accurately describes my schema. Just for context, before it looked like this:
CREATE TABLE "User_coinflips" (
"nodeId" integer REFERENCES "User"(id) ON DELETE CASCADE,
position integer,
value boolean NOT NULL,
CONSTRAINT "User_coinflips_pkey" PRIMARY KEY ("nodeId", position)
);
You are looking for an UPDATE, rather than an INSERT.
Use a derived table with the aggregated values to join against in the UPDATE statement:
update "User"
   set coinflips = t.flips
  from (
        select "nodeId", array_agg(value order by position) as flips
        from "User_coinflips"
        group by "nodeId"
       ) t
 where t."nodeId" = "User".id;

SQL - How do you use a user defined function to constrain a value between 2 tables

First here's the relevant code:
create table customer (
    customer_mail_address varchar(255) not null,
    subscription_start date not null,
    subscription_end date,
    check (subscription_end !< subscription_start),
    constraint pk_customer primary key (customer_mail_address)
)

create table watchhistory (
    customer_mail_address varchar(255) not null,
    watch_date date not null,
    constraint pk_watchhistory primary key (movie_id, customer_mail_address, watch_date)
)
alter table watchhistory
add constraint fk_watchhistory_ref_customer foreign key (customer_mail_address)
references customer (customer_mail_address)
on update cascade
on delete no action
go
So I want to use a UDF to constrain the watch_date in watchhistory between the subscription_start and subscription_end in customer. I can't seem to figure it out.
Check constraints can't validate data against other tables, the docs say (emphasis mine):
[ CONSTRAINT constraint_name ]
{
...
CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}
logical_expression
Is a logical expression used in a CHECK constraint and returns TRUE or
FALSE. logical_expression used with CHECK constraints cannot
reference another table but can reference other columns in the same
table for the same row. The expression cannot reference an alias data
type.
That being said, you can create a scalar function that validates your date, and use the scalar function on the check condition instead:
CREATE FUNCTION dbo.ufnValidateWatchDate (
    @WatchDate DATE,
    @CustomerMailAddress VARCHAR(255))
RETURNS BIT
AS
BEGIN
    IF EXISTS (
        SELECT
            'supplied watch date is between subscription start and end'
        FROM
            customer AS C
        WHERE
            C.customer_mail_address = @CustomerMailAddress AND
            @WatchDate BETWEEN C.subscription_start AND C.subscription_end)
    BEGIN
        RETURN 1
    END

    RETURN 0
END
Now add your check constraint so it validates that the result of the function is 1:
ALTER TABLE watchhistory
    ADD CONSTRAINT CHK_watchhistory_ValidWatchDate
    CHECK (dbo.ufnValidateWatchDate(watch_date, customer_mail_address) = 1)
This is not a direct link to the other table, but a workaround you can do to validate the date. Keep in mind that if you update the customer dates after the watchdate insert, dates will be inconsistent. The only way to ensure full consistency in this case would be with a few triggers.
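For completeness, one of those triggers might look roughly like this (a sketch only, reusing the table names above): an AFTER UPDATE trigger on customer that rejects changes which would leave existing watchhistory rows outside the new subscription window.
CREATE TRIGGER trg_customer_ValidateWatchHistory
ON customer
AFTER UPDATE
AS
BEGIN
    -- reject the update if any existing watch_date now falls outside the subscription window
    IF EXISTS (
        SELECT 1
        FROM inserted AS i
        JOIN watchhistory AS w
            ON w.customer_mail_address = i.customer_mail_address
        WHERE w.watch_date NOT BETWEEN i.subscription_start
                                   AND ISNULL(i.subscription_end, '9999-12-31'))
    BEGIN
        RAISERROR('Subscription change would leave watchhistory rows outside the subscription period.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END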

Primary key consists of a foreign key and a identity and should reset identity under a condition

create table Linq_TB
{
url_id int NOTNULL,
Pg_Name nvarchar(50) NOTNULL,
URL nvarchar(50) NUTNULL,
CONSTRAINT Linq_id PRIMARY KEY (url_id,DBCC Checkident(Linq_TB,RESEED,0) case url_id not in(select URL_Id from URL_TB ))
}
I want to make a table whose primary key is Linq_id, which gets its value from both url_id and an identity that starts at 1 and increments by 1. url_id is a foreign key. For example, if url_id is 1, the Linq_id values will be 11, 12, 13, ... and I also want to reset the Linq_id identity when the url_id changes.
What should the query be? The query above doesn't work, why?
Thanks in advance
Well, a constraint contains conditions and not code to be executed. You should consider using a stored procedure for your task and also a homegrown method of assigning IDs.
However, it is not a common practice to have your primary keys 'pretty' or formatted, as there is no real benefit (except maybe for debugging purposes).
I do not recommend executing DBCC whenever your url_ID changes. This has a great negative impact on performance.
Why don't you leave the IDs like they are?
You can do this with the following table and trigger definitions:
CREATE TABLE Linq_TB
(
    url_id INT NOT NULL,
    Linq_id INT NOT NULL,
    Pg_Name NVARCHAR(50) NOT NULL,
    URL NVARCHAR(50) NOT NULL,
    CONSTRAINT PK_Link_TB PRIMARY KEY (url_id, Linq_id),
    CONSTRAINT FK_URL_TB_URL_ID FOREIGN KEY (url_id) REFERENCES URL_TB (url_id)
)
GO

CREATE TRIGGER tr_Linq_TB_InsertUpdate
ON Linq_TB
INSTEAD OF INSERT
AS
    INSERT INTO Linq_TB
    SELECT i.url_id,
           ISNULL(tb.Linq_id, 0)
               + row_number() over (partition by i.url_id order by (select 1)),
           i.Pg_Name, i.URL
    FROM inserted i
         LEFT OUTER JOIN
         (
             SELECT url_id, MAX(Linq_ID) Linq_id
             FROM Linq_TB
             GROUP BY url_id
         ) tb ON i.url_id = tb.url_id
GO
The CREATE TABLE defines your columns and constraints. And, the trigger creates the logic to generate a sequence value in your Linq_id column for each url_id.
Note that the logic in the trigger is not complete. A couple of issues are not addressed: 1) If the url_id changes for a row, the trigger doesn't update the Link_id, and 2) deleting rows will lead to gaps in the Linq_TB column sequence.
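For illustration, a quick usage sketch (hypothetical values; it assumes URL_TB already contains url_id 1 and 2, and supplies a placeholder Linq_id that the trigger then overwrites):
INSERT INTO Linq_TB (url_id, Linq_id, Pg_Name, URL) VALUES (1, 0, N'Home', N'/home');
INSERT INTO Linq_TB (url_id, Linq_id, Pg_Name, URL) VALUES (1, 0, N'About', N'/about');
INSERT INTO Linq_TB (url_id, Linq_id, Pg_Name, URL) VALUES (2, 0, N'Home', N'/home');

-- Expected (url_id, Linq_id) pairs: (1, 1), (1, 2), (2, 1)
SELECT url_id, Linq_id, Pg_Name, URL
FROM Linq_TB
ORDER BY url_id, Linq_id;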

Constraint for only one record marked as default

How could I set a constraint on a table so that only one of the records has its isDefault bit field set to 1?
The constraint is not table scope, but one default per set of rows, specified by a FormID.
Use a unique filtered index
On SQL Server 2008 or higher you can simply use a unique filtered index
CREATE UNIQUE INDEX IX_TableName_FormID_isDefault
ON TableName(FormID)
WHERE isDefault = 1
Where the table is
CREATE TABLE TableName(
    FormID INT NOT NULL,
    isDefault BIT NOT NULL
)
For example if you try to insert many rows with the same FormID and isDefault set to 1 you will have this error:
Cannot insert duplicate key row in object 'dbo.TableName' with unique
index 'IX_TableName_FormID_isDefault'. The duplicate key value is (1).
Source: http://technet.microsoft.com/en-us/library/cc280372.aspx
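For example (a small sketch of the failing case):
INSERT INTO TableName (FormID, isDefault) VALUES (1, 0); -- ok
INSERT INTO TableName (FormID, isDefault) VALUES (1, 0); -- ok: any number of non-defaults
INSERT INTO TableName (FormID, isDefault) VALUES (1, 1); -- ok: first default for FormID 1
INSERT INTO TableName (FormID, isDefault) VALUES (2, 1); -- ok: default for a different FormID
INSERT INTO TableName (FormID, isDefault) VALUES (1, 1); -- fails with the duplicate key error above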
Here's a modification of Damien_The_Unbeliever's solution that allows one default per FormID.
CREATE VIEW form_defaults
WITH SCHEMABINDING
AS
SELECT FormID
FROM dbo.whatever
WHERE isDefault = 1
GO
CREATE UNIQUE CLUSTERED INDEX ix_form_defaults on form_defaults (FormID)
GO
But the serious relational folks will tell you this information should just be in another table.
CREATE TABLE form (
    FormID int NOT NULL PRIMARY KEY,
    DefaultWhateverID int FOREIGN KEY REFERENCES Whatever(ID)
)
From a normalization perspective, this would be an inefficient way of storing a single fact.
I would opt to hold this information at a higher level, by storing (in a different table) a foreign key to the identifier of the row which is considered to be the default.
CREATE TABLE [dbo].[Foo](
[Id] [int] NOT NULL,
CONSTRAINT [PK_Foo] PRIMARY KEY CLUSTERED
(
[Id] ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[DefaultSettings](
[DefaultFoo] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DefaultSettings] WITH CHECK ADD CONSTRAINT [FK_DefaultSettings_Foo] FOREIGN KEY([DefaultFoo])
REFERENCES [dbo].[Foo] ([Id])
GO
ALTER TABLE [dbo].[DefaultSettings] CHECK CONSTRAINT [FK_DefaultSettings_Foo]
GO
You could use an insert/update trigger.
Within the trigger after an insert or update, if the count of rows with isDefault = 1 is more than 1, then rollback the transaction.
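A sketch of that approach (set-based, so it also handles multi-row inserts and updates; the TableName/FormID/isDefault names follow the filtered-index answer above):
CREATE TRIGGER trg_OnlyOneDefault
ON TableName
AFTER INSERT, UPDATE
AS
BEGIN
    -- roll back if any FormID now has more than one default row
    IF EXISTS (
        SELECT FormID
        FROM TableName
        WHERE isDefault = 1
        GROUP BY FormID
        HAVING COUNT(*) > 1)
    BEGIN
        RAISERROR('Only one default row is allowed per FormID.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END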
CREATE VIEW vOnlyOneDefault
WITH SCHEMABINDING
AS
    SELECT 1 as Lock
    FROM dbo.<underlying table>
    WHERE isDefault = 1
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOnlyOneDefault on vOnlyOneDefault (Lock)
GO
You'll need to have the right ANSI settings turned on for this.
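For reference, an indexed view generally has to be created WITH SCHEMABINDING and with these SET options in effect (a summary; verify against the documentation for your SQL Server version):
SET ANSI_NULLS ON;
SET ANSI_PADDING ON;
SET ANSI_WARNINGS ON;
SET ARITHABORT ON;
SET CONCAT_NULL_YIELDS_NULL ON;
SET QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;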
I don't know about SQL Server, but if it supports function-based indexes like Oracle does, I hope this can be translated; if not, sorry.
You can create an index like this. Supposing the default value is 1234, the column is DEFAULT_COLUMN, and ID_COLUMN is the primary key:
CREATE UNIQUE INDEX only_one_default
    ON my_table ( DECODE(DEFAULT_COLUMN, 1234, -1, ID_COLUMN) )
This DDL creates a unique index that indexes -1 when the value of DEFAULT_COLUMN is 1234, and ID_COLUMN in any other case. Then, if two rows have the default DEFAULT_COLUMN value, it raises an exception.
The question implies to me that you have a primary table that has some child records, and one of those child records will be the default record. Using addresses and a separate default table, here is an example of how to make that happen using third normal form. Of course I don't know if it's valuable to answer something that is so old, but it struck my fancy.
--drop table dev.defaultAddress;
--drop table dev.addresses;
--drop table dev.people;
CREATE TABLE [dev].[people](
    [Id] [int] identity primary key,
    name char(20)
)
GO
CREATE TABLE [dev].[Addresses](
    id int identity primary key,
    peopleId int foreign key references dev.people(id),
    address varchar(100)
) ON [PRIMARY]
GO
CREATE TABLE [dev].[defaultAddress](
    id int identity primary key,
    peopleId int foreign key references dev.people(id),
    addressesId int foreign key references dev.addresses(id))
go
create unique index defaultAddress on dev.defaultAddress (peopleId)
go
create unique index idx_addr_id_person on dev.addresses(peopleid, id);
go
ALTER TABLE dev.defaultAddress
    ADD CONSTRAINT FK_Def_People_Address
    FOREIGN KEY (peopleID, addressesID)
    REFERENCES dev.Addresses(peopleId, id)
go
go
insert into dev.people (name)
select 'Bill' union
select 'John' union
select 'Harry'
insert into dev.Addresses (peopleid, address)
select 1, '123 someplace' union
select 1,'work place' union
select 2,'home address' union
select 3,'some address'
insert into dev.defaultaddress (peopleId, addressesid)
select 1,1 union
select 2,3
-- so two home addresses are default now
-- try adding another default address to Bill and you get an error
select * from dev.people
join dev.addresses on people.id = addresses.peopleid
left join dev.defaultAddress on defaultAddress.peopleid = people.id and defaultaddress.addressesid = addresses.id
insert into dev.defaultaddress (peopleId, addressesId)
select 1,2
GO
You could do it through an INSTEAD OF trigger, or, if you want it as a constraint, create a constraint that references a function that checks for a row that has the default set to 1.
EDIT oops, needs to be <=
Create table mytable(id1 int, defaultX bit not null default(0))
go

create Function dbo.fx_DefaultExists()
returns int as
Begin
    Declare @Ret int
    Set @ret = 0
    Select @ret = count(1) from mytable
    Where defaultX = 1
    Return @ret
End
GO

Alter table mytable add
    CONSTRAINT [CHK_DEFAULT_SET] CHECK
    (([dbo].fx_DefaultExists()<=(1)))
GO

Insert into mytable (id1, defaultX) values (1,1)
Insert into mytable (id1, defaultX) values (2,1)
This is a fairly complex process that cannot be handled through a simple constraint.
We do this through a trigger. However before you write the trigger you need to be able to answer several things:
do we want to fail the insert if a default exists, change the new record to 0 instead of 1, or change the existing default to 0 and leave this one as 1?
what do we want to do if the default record is deleted and other non-default records are still there? Do we make one of them the default, and if so, how do we determine which one?
You will also need to be very, very careful to make the trigger handle multiple row processing. For instance a client might decide that all of the records of a particular type should be the default. You wouldn't change a million records one at a time, so this trigger needs to be able to handle that. It also needs to handle that without looping or the use of a cursor (you really don't want the type of transaction discussed above to take hours locking up the table the whole time).
You also need a very extensive testing scenario for this trigger before it goes live. You need to test:
adding a record with no default and it is the first record for that customer
adding a record with a default and it is the first record for that customer
adding a record with no default and it is not the first record for that customer
adding a record with a default and it is not the first record for that customer
Updating a record to have the default when no other record has it (assuming you don't require one record to always be set as the default)
Updating a record to remove the default
Deleting the record with the default
Deleting a record without the default
Performing a mass insert with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record inserts
Performing a mass update with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record updates
Performing a mass delete with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record deletes
@Andy Jones gave an answer above closest to mine, but bearing in mind the Rule of Three, I placed the logic directly in the stored proc that updates this table. This was my simple solution. If I need to update the table from elsewhere, I will move the logic to a trigger. The one-default rule applies to each set of records specified by a FormID and a ConfigID:
ALTER proc [dbo].[cpForm_UpdateLinkedReport]
    @reportLinkId int,
    @defaultYN bit,
    @linkName nvarchar(150)
as

if @defaultYN = 1
begin
    declare @formId int, @configId int

    select @formId = FormID, @configId = ConfigID
    from csReportLink
    where ReportLinkID = @reportLinkId

    update csReportLink
    set DefaultYN = 0
    where isnull(ConfigID, @configId) = @configId and FormID = @formId
end

update
    csReportLink
set
    DefaultYN = @defaultYN,
    LinkName = @linkName
where
    ReportLinkID = @reportLinkId