SQL unique constraint of a max of two entries per user - sql

Is it possible to create a unique constraint in SQL that allows a single user (user_id) up to two entries that are enabled (enabled)? An example is as follows:
user_id | enabled
--------+---------
    123 | true
    123 | true
    123 | false
    456 | true
The above would be valid, but adding another row with user_id = 123 and enabled = true would fail because there would then be three enabled entries. Additionally, adding user_id = 123 and enabled = false would be valid because the table would still satisfy the rule.

You could make it work by adding another boolean column to the UNIQUE or PRIMARY KEY constraint (or UNIQUE index):
CREATE TABLE tbl (
user_id int
, enabled bool
, enabled_first bool DEFAULT true
, PRIMARY KEY (user_id, enabled, enabled_first)
);
enabled_first tags the first row of each (user_id, enabled) combination with true. I made it DEFAULT true to allow a simple insert for the first instance per (user_id, enabled) - without mentioning the added enabled_first. An explicit enabled_first = false is required to insert a second instance.
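For example, a quick sketch of the resulting insert behavior (table and values as above):
INSERT INTO tbl (user_id, enabled) VALUES (123, true);  -- 1st enabled row; enabled_first defaults to true
INSERT INTO tbl (user_id, enabled, enabled_first) VALUES (123, true, false);  -- 2nd enabled row
INSERT INTO tbl (user_id, enabled) VALUES (123, true);  -- fails: duplicate key (123, true, true)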
NULL values are excluded automatically by the PK constraint I used. Be aware that a simple UNIQUE constraint still allows NULL values, working around your desired constraint. You would have to define all three columns NOT NULL additionally. See:
Allow null in unique column
Of course, the two rows sharing the same true / false value are now distinguishable internally, and you need to adjust write operations accordingly. This may or may not be acceptable. It may even be desirable.
Welcome side-effect: Since the minimum payload (actual data size) is 8 bytes per index tuple, and boolean occupies 1 byte without requiring alignment padding, the index is still the same minimum size as for just (user_id, enabled).
Similar for the table: the added boolean does not increase physical storage. (May not apply for tables with more columns.) See:
Calculating and saving space in PostgreSQL
Is a composite index also good for queries on the first field?

You cannot make a unique constraint allow two rows with the same "enabled" value directly. But here is a solution that comes close to what you want without using triggers. The idea is to encode the value as numbers and enforce uniqueness on two of the values:
create table t (
    user_id int,
    enabled_code int,
    is_enabled boolean generated always as (enabled_code <> 0) stored,  -- generated column; Postgres 12+ syntax
    check (enabled_code in (0, 1, 2))
);
create unique index unq_t_enabled_code_1
on t(user_id, enabled_code)
where enabled_code = 1;
create unique index unq_t_enabled_code_2
on t(user_id, enabled_code)
where enabled_code = 2;
Inserting new values is a bit tricky, because you need to check if the value goes in slot "1" or "2". However, you can use is_enabled as the boolean value for querying.
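One way to pick the slot at insert time, sketched here as an illustration (not from the original answer; under concurrent inserts a unique violation can still occur and would need a retry):
-- insert into slot 1 if it is free, otherwise slot 2; the partial
-- unique indexes still reject a third enabled row
INSERT INTO t (user_id, enabled_code)
SELECT 123,
       CASE WHEN EXISTS (SELECT 1 FROM t WHERE user_id = 123 AND enabled_code = 1)
            THEN 2 ELSE 1 END;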

It has been explained already that a constraint or unique index alone cannot enforce the logic that you want.
An alternative approach would be to use a materialized view. The logic is to use window functions to create an additional column in the view that alternates between 0 and 1 across rows sharing the same (user_id, enabled). You can then put a partial unique index on that column. Finally, you can create a trigger that refreshes the view every time a record is inserted or updated, which effectively enforces the unique constraint.
-- table set-up
create table mytable(user_id int, enabled boolean);

-- materialized view set-up
create materialized view myview as
select
    user_id,
    enabled,
    (row_number() over (partition by user_id, enabled) - 1) % 2 as rn
from mytable;

-- unique partial index that enforces integrity
create unique index on myview(user_id, rn) where (enabled);

-- trigger code
create or replace function refresh_myview()
returns trigger language plpgsql
as $$
begin
    refresh materialized view myview;
    return null;
end $$;

create trigger refresh_myview
after insert or update on mytable
for each row execute procedure refresh_myview();
With this set-up in place, let's insert the initial content:
insert into mytable values
(123, true),
(123, true),
(234, false),
(234, true);
This works, and the content of the view is now:
user_id | enabled | rn
--------+---------+----
    123 | t       |  0
    123 | t       |  1
    234 | f       |  0
    234 | t       |  0
Now if we try to insert a row that violates the constraint, an error is raised, and the insert is rejected.
insert into mytable values(123, true);
-- ERROR: could not create unique index "myview_user_id_rn_idx"
-- DETAIL: Key (user_id, rn)=(123, 0) is duplicated.
-- CONTEXT: SQL statement "refresh materialized view myview"
-- PL/pgSQL function refresh_myview() line 3 at SQL statement

Related

Why does SQL Server populate new fields in existing rows in some environments and not others?

I am using MS SQL Server 2012. I have this bit of SQL:
alter table SomeTable
add Active bit not null default 1
In some environments the default value is applied to existing rows, and in other environments we have to add an update script to set the new field to 1. Naturally I am thinking that the difference is a SQL Server setting, but my searches thus far have not suggested which one. Any suggestions?
Let me know if the values of particular settings are desired.
Edit: In the environments that don't apply the default the existing rows are set to 0, which at least conforms to the NOT NULL.
If you add the column as NOT NULL, it will be set to the default value for existing rows.
If you add the column as NULL, it will be NULL for existing rows despite having a default constraint when added to the table.
For example:
create table SomeTable (id int);
insert into SomeTable values (1);
alter table SomeTable add Active_NotNull bit not null default 1;
alter table SomeTable add Active_Null bit null default 1;
select * from SomeTable;
returns:
+----+----------------+-------------+
| id | Active_NotNull | Active_Null |
+----+----------------+-------------+
| 1 | 1 | NULL |
+----+----------------+-------------+
dbfiddle.uk demo: http://dbfiddle.uk/?rdbms=sqlserver_2016&fiddle=c4aeea808684de48097ff44d391c9954
The default value is applied to existing rows to avoid violating the NOT NULL constraint.

PostgreSQL INSERT or UPDATE values given a SELECT result after a trigger has been hit

Here is my structure (with values):
user_eval_history table
user_eval_id | user_id | is_good_eval
-------------+---------+--------------
           1 |       1 | t
           2 |       1 | t
           3 |       1 | f
           4 |       2 | t
user_metrics table
user_metrics_id | user_id | nb_good_eval | nb_bad_eval
----------------+---------+--------------+-------------
              1 |       1 |            2 |           1
              2 |       2 |            1 |           0
For access time (performance) reasons I want to avoid recomputing user evaluation from the history again and again.
I would like to store/update the sums of evaluations (for a given user) every time a new evaluation is given to the user (meaning every time there is an INSERT into the user_eval_history table, I want to update the user_metrics table for the corresponding user_id).
I feel like I can achieve this with a trigger and a stored procedure but I'm not able to find the correct syntax for this.
I think I need to do what follows:
1. Create a trigger on user metrics:
CREATE TRIGGER update_user_metrics_trigger AFTER INSERT
ON user_eval_history
FOR EACH ROW
EXECUTE PROCEDURE update_user_metrics('user_id');
2. Create a stored procedure update_user_metrics that
2.1 Computes the metrics from the user_eval_history table for user_id
SELECT
user_id,
SUM( CASE WHEN is_good_eval='t' THEN 1 ELSE 0) as nb_good_eval,
SUM( CASE WHEN is_good_eval='f' THEN 1 ELSE 0) as nb_bad_eval
FROM user_eval_history
WHERE user_id = 'user_id' -- don't know the syntax here
2.2.1 Creates the entry into user_metrics if not already existing
INSERT INTO user_metrics
(user_id, nb_good_eval, nb_bad_eval) VALUES
(user_id, nb_good_eval, nb_bad_eval) -- Syntax?????
2.2.2 Updates the user_metrics entry if already existing
UPDATE user_metrics SET
(user_id, nb_good_eval, nb_bad_eval) = (user_id, nb_good_eval, nb_bad_eval)
I think I'm close to what is needed but don't know how to achieve this. Especially I don't know about the syntax.
Any idea?
Note: Please, no "RTFM" answers, I looked up for hours and didn't find anything but trivial examples.
First, revisit the assumption that maintaining an always current materialized view is a significant performance gain. You add a lot of overhead and make writes to user_eval_history a lot more expensive. The approach only makes sense if writes are rare while reads are more common. Else, consider a VIEW instead, which is more expensive for reads, but always current. With appropriate indexes on user_eval_history this may be cheaper overall.
Next, consider an actual MATERIALIZED VIEW (Postgres 9.3+) for user_metrics instead of keeping it up to date manually, especially if write operations to user_eval_history are very rare. The tricky part is when to refresh the MV.
Your approach makes sense if you are somewhere in between, user_eval_history has a non-trivial size and you need user_metrics to reflect the current state exactly and close to real-time.
Still on board? OK. First you need to define exactly what's allowed / possible and what's not. Can rows in user_eval_history be deleted? Can the last row of a user in user_eval_history be deleted? Probably yes, even if you would answer "No". Can rows in user_eval_history be updated? Can user_id be changed? Can is_good_eval be changed? If yes, you need to prepare for each of these cases.
Assuming the trivial case: INSERT only. No UPDATE, no DELETE. There is still the possible race condition you have been discussing with @sn00k4h. You found an answer to that, but that was for INSERT or SELECT, while you have a classic UPSERT problem: INSERT or UPDATE:
FOR UPDATE, like you considered in the comments, is not the silver bullet here. UPDATE user_metrics ... locks the row it updates anyway. The problematic case is when two INSERTs try to create a row for a new user_id concurrently. In Postgres, you cannot lock key values that are not yet present in the unique index, so FOR UPDATE can't help. You need to prepare for a possible unique violation and retry, as discussed in these linked answers:
Upsert with a transaction
How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
Code
Assuming these table definitions:
CREATE TABLE user_eval_history (
user_eval_id serial PRIMARY KEY
, user_id int NOT NULL
, is_good_eval boolean NOT NULL
);
CREATE TABLE user_metrics (
user_metrics_id serial -- seems useless
, user_id int PRIMARY KEY
, nb_good_eval int NOT NULL DEFAULT 0
, nb_bad_eval int NOT NULL DEFAULT 0
);
First, you need a trigger function before you can create a trigger.
CREATE OR REPLACE FUNCTION trg_user_eval_history_upaft()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   LOOP
      IF NEW.is_good_eval THEN
         UPDATE user_metrics
         SET    nb_good_eval = nb_good_eval + 1
         WHERE  user_id = NEW.user_id;
      ELSE
         UPDATE user_metrics
         SET    nb_bad_eval = nb_bad_eval + 1
         WHERE  user_id = NEW.user_id;
      END IF;
      EXIT WHEN FOUND;
      BEGIN  -- enter block with exception handling
         IF NEW.is_good_eval THEN
            INSERT INTO user_metrics (user_id, nb_good_eval)
            VALUES (NEW.user_id, 1);
         ELSE
            INSERT INTO user_metrics (user_id, nb_bad_eval)
            VALUES (NEW.user_id, 1);
         END IF;
         RETURN NULL;  -- returns from function, NULL for AFTER trigger
      EXCEPTION WHEN UNIQUE_VIOLATION THEN  -- user_metrics.user_id is UNIQUE
         RAISE NOTICE 'It actually happened!';  -- hardly ever happens
      END;
   END LOOP;
   RETURN NULL;  -- NULL for AFTER trigger
END
$func$;
In particular, you don't pass user_id as a parameter to the trigger function: the special variable NEW automatically holds the values of the triggering row. Details in the Postgres manual.
Trigger:
CREATE TRIGGER upaft_update_user_metrics
AFTER INSERT ON user_eval_history
FOR EACH ROW EXECUTE PROCEDURE trg_user_eval_history_upaft();
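As an aside (not part of the original answer): on Postgres 9.5 or later, INSERT ... ON CONFLICT makes the retry loop unnecessary. A minimal sketch of the same trigger function:
CREATE OR REPLACE FUNCTION trg_user_eval_history_upaft()
  RETURNS trigger
  LANGUAGE plpgsql AS
$func$
BEGIN
   INSERT INTO user_metrics (user_id, nb_good_eval, nb_bad_eval)
   VALUES (NEW.user_id
         , NEW.is_good_eval::int         -- 1 if good, else 0
         , (NOT NEW.is_good_eval)::int)  -- 1 if bad, else 0
   ON CONFLICT (user_id) DO UPDATE
   SET nb_good_eval = user_metrics.nb_good_eval + EXCLUDED.nb_good_eval
     , nb_bad_eval  = user_metrics.nb_bad_eval  + EXCLUDED.nb_bad_eval;
   RETURN NULL;  -- NULL for AFTER trigger
END
$func$;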

Incrementing with one query a set of values in a field with UNIQUE constraint, Postgres

I have a table in which I have a numeric field A, which is set to be UNIQUE. This field is used to indicate an order in which some actions have to be performed. I want to UPDATE all of the values that are greater than, for example, 3.
I have
A
1
2
3
4
5
Now, I want to add 1 to all values of A greater than 3. So, the result would be
A
1
2
3
5
6
The question is, whether it is possible to be done using only one query? Remember that I have a UNIQUE constraint on the column A.
Obviously, I tried
UPDATE my_table SET A = A + 1 WHERE A > 3;
but it did not work as I have the constraint on this field.
PostgreSQL 9.0 and later
PostgreSQL 9.0 added deferrable unique constraints, which is exactly the feature you seem to need. This way, uniqueness is checked at commit-time rather than update-time.
Create the UNIQUE constraint with the DEFERRABLE keyword:
ALTER TABLE foo ADD CONSTRAINT foo_uniq UNIQUE (foo_id) DEFERRABLE;
Later, before running the UPDATE statement, you run in the same transaction:
SET CONSTRAINTS foo_uniq DEFERRED;
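Put together, a sketch of the complete transaction (using the foo table from this example):
BEGIN;
SET CONSTRAINTS foo_uniq DEFERRED;
UPDATE foo SET foo_id = foo_id + 1 WHERE foo_id > 3;
COMMIT;  -- uniqueness is checked here, after all rows are updated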
Alternatively you can create the constraint with the INITIALLY DEFERRED keyword on the unique constraint itself -- so you don't have to run SET CONSTRAINTS -- but this might affect the performance of your other queries which don't need to defer the constraint.
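For example:
ALTER TABLE foo ADD CONSTRAINT foo_uniq UNIQUE (foo_id) DEFERRABLE INITIALLY DEFERRED;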
PostgreSQL 8.4 and older
If you only want to use the unique constraint for guaranteeing uniqueness -- not as a target for a foreign key -- then this workaround might help:
First, add a boolean column such as is_temporary to the table that temporarily distinguishes updated and non-updated rows:
CREATE TABLE foo (value int not null, is_temporary bool not null default false);
Next create a partial unique index that only affects rows where is_temporary=false:
CREATE UNIQUE INDEX ON foo (value) WHERE is_temporary=false;
Now, every time you make the updates you described, you run them in two steps:
UPDATE foo SET is_temporary=true, value=value+1 WHERE value>3;
UPDATE foo SET is_temporary=false WHERE is_temporary=true;
As long as these statements occur in a single transaction, this will be totally safe -- other sessions will never see the temporary rows. The downside is that you'll be writing the rows twice.
Do note that this is merely a unique index, not a constraint, but in practice it shouldn't matter.
You can do it in two queries with a simple trick:
First, update the column with +1, but multiplied by a factor of -1:
update my_table set A = (A + 1) * -1 where A > 3;
This switches 4, 5, 6 to -5, -6, -7, which cannot collide with the remaining positive values.
Second, flip the sign back to restore positive values:
update my_table set A = A * -1 where A < 0;
You end up with 5, 6, 7.
You can do this with a loop. I don't like this solution, but it works:
CREATE TABLE update_unique (id INT NOT NULL);
CREATE UNIQUE INDEX ux_id ON update_unique (id);
INSERT INTO update_unique(id) SELECT a FROM generate_series(1,100) AS foo(a);
DO $$
DECLARE
   v INT;
BEGIN
   -- walk the ids from highest to lowest so each +1 never collides
   FOR v IN SELECT id FROM update_unique WHERE id > 3 ORDER BY id DESC
   LOOP
      UPDATE update_unique SET id = id + 1 WHERE id = v;
   END LOOP;
END;
$$ LANGUAGE plpgsql;
In PostgreSQL 8.4, you might have to create a function to do this since you can't arbitrarily run PL/PGSQL from a prompt using DO (at least not to the best of my recollection).

postgres: How to prevent INSERT in a special case

I have a table 'foo' that looks like this:
 ID  | NAME
-----+----------------------------
 123 | PiratesAreCool
 254 | NinjasAreCoolerThanPirates
and a second table 'bar'
 SID  | ID  | created    | dropped
------+-----+------------+------------
 9871 | 123 | 03.24.2009 | 03.26.2009
 9872 | 123 | 04.02.2009 |
bar.ID is a reference (foreign key) to foo.ID.
Now I want to prevent inserting a new record into 'bar' when there is already a record with the same ID where bar.dropped is null.
So, when the 'bar' looks like above
INSERT INTO BAR VALUES ('9873','123','07.24.2009',NULL);
should be forbidden, but
INSERT INTO BAR VALUES ('9873','254','07.24.2009',NULL);
should be allowed (because there is no 'open' bar-record for 'NinjasAreCoolerThanPirates').
How do I do that? I hope my problem is clear and somebody can help me.
Hmm, it should be enough to just create a unique index:
create unique index ix_open_bar on bar (id, dropped);
Of course, that would also have the effect that you cannot drop a bar twice per day (unless dropped is a timestamp, which would minimize the risk).
Actually, I noticed that Postgres has support for partial indexes:
create unique index ix_open_bar on bar (id) where dropped is null;
Update:
After some tests: uniqueness is not enforced between null values in the index above, but the partial index will still work.
And if you don't want to use the partial indexes, this might work as well:
create unique index ix_open_bar on bar(id, coalesce(dropped, 'NULL'));
However, when using coalesce, you need to have the same datatypes on them (so if dropped is a timestamp, you need to change 'NULL' to a timestamp value instead).
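For instance, if dropped is a timestamp, a sentinel value such as infinity could stand in for NULL (a sketch, assuming Postgres):
create unique index ix_open_bar on bar (id, coalesce(dropped, 'infinity'::timestamp));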
This will only insert a record if there isn't an 'open' record in bar for your id:
INSERT INTO bar
SELECT '9873','254','07.24.2009',NULL
WHERE NOT EXISTS(SELECT 1 FROM bar WHERE ID='254' AND dropped IS NULL)
Set up a trigger on the table bar on insert that checks to see if the current row's ID is already present in the table and rejects it if so.
I don't know the specific postgres syntax, but it should work something like this:
CREATE TRIGGER trigger_name BEFORE INSERT ON bar
IF EXISTS (
SELECT 1
FROM bar
WHERE bar.ID = inserted.ID
AND bar.dropped IS NULL
)
BEGIN
// raise an error or reject or whatever Postgres calls it.
END
And then whenever you try to insert into bar, this trigger will check if something already exists and reject it if so. If bar.dropped isn't null, it'll allow the insert just fine.
If someone knows the right syntax for this, please feel free to edit my answer.
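For the record, a sketch of what that trigger might look like in Postgres (plpgsql; the function name reject_open_bar is made up for this example):
CREATE OR REPLACE FUNCTION reject_open_bar()
RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
   IF EXISTS (SELECT 1 FROM bar WHERE id = NEW.id AND dropped IS NULL) THEN
      RAISE EXCEPTION 'there is already an open bar record for id %', NEW.id;
   END IF;
   RETURN NEW;
END
$$;

CREATE TRIGGER trg_reject_open_bar
BEFORE INSERT ON bar
FOR EACH ROW EXECUTE PROCEDURE reject_open_bar();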
You can create a partial index with a WHERE clause. For your purposes this might do:
CREATE UNIQUE INDEX my_check on bar(id) where dropped is null;
Assuming id 124 does NOT exist in the table, this will be allowed, but only ONE record can have dropped = NULL for a given ID:
INSERT INTO BAR VALUES ('9873','124','07.24.2009',NULL);
And this will be allowed whether or not 124 already exists:
INSERT INTO BAR VALUES ('9873','124','07.24.2009','07.24.2009');
If an open record for 125 already exists, this will not be allowed:
INSERT INTO BAR VALUES ('9873','125','07.24.2009',NULL);
But this will:
INSERT INTO BAR VALUES ('9873','125','07.24.2009','07.24.2009');

Swap unique indexed column values in database

I have a database table, and one of the fields (not the primary key) has a unique index on it. Now I want to swap the values under this column for two rows. How could this be done? Two hacks I know are:
Delete both rows and re-insert them.
Update the rows with some other value, swap them, and then update to the actual values.
But I don't want to go for these as they do not seem to be the appropriate solution to the problem.
Could anyone help me out?
The magic word is DEFERRABLE here:
DROP TABLE ztable CASCADE;
CREATE TABLE ztable
( id integer NOT NULL PRIMARY KEY
, payload varchar
);
INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' );
SELECT * FROM ztable;
-- This works, because there is no constraint
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload)
DEFERRABLE INITIALLY DEFERRED
;
-- This should also work, because the constraint
-- is deferred until "commit time"
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
RESULT:
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable"
CREATE TABLE
INSERT 0 3
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
UPDATE 2
id | payload
----+---------
1 | one
2 | three
3 | two
(3 rows)
NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable"
ALTER TABLE
UPDATE 2
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of.
If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful.
But in short: there is no other solution than the ones you provided.
Further to Andy Irving's answer, this worked for me (on SQL Server 2005) in a similar situation, where I have a composite key and need to swap a field which is part of the unique constraint.
key: pID, LNUM
rec1: 10, 0
rec2: 10, 1
rec3: 10, 2
and I need to swap LNUM so that the result is
key: pID, LNUM
rec1: 10, 1
rec2: 10, 2
rec3: 10, 0
the SQL needed:
UPDATE DOCDATA
SET LNUM = CASE LNUM
WHEN 0 THEN 1
WHEN 1 THEN 2
WHEN 2 THEN 0
END
WHERE (pID = 10)
AND (LNUM IN (0, 1, 2))
There is another approach that works with SQL Server: use a temp table and join to it in your UPDATE statement.
The problem is caused by having two rows with the same value at the same time, but if you update both rows at once (to their new, unique values), there is no constraint violation.
Pseudo-code:
-- setup initial data values:
insert into data_table(id, name) values(1, 'A')
insert into data_table(id, name) values(2, 'B')
-- create temp table that matches live table
select top 0 * into #tmp_data_table from data_table
-- insert records to be swapped
insert into #tmp_data_table(id, name) values(1, 'B')
insert into #tmp_data_table(id, name) values(2, 'A')
-- update both rows at once! No index violations!
update data_table set name = #tmp_data_table.name
from data_table join #tmp_data_table on (data_table.id = #tmp_data_table.id)
Thanks to Rich H for this technique.
Assuming you know the PK of the two rows you want to update... This works in SQL Server, can't speak for other products. SQL is (supposed to be) atomic at the statement level:
CREATE TABLE testing
(
cola int NOT NULL,
colb CHAR(1) NOT NULL
);
CREATE UNIQUE INDEX UIX_testing_a ON testing(colb);
INSERT INTO testing VALUES (1, 'b');
INSERT INTO testing VALUES (2, 'a');
SELECT * FROM testing;
UPDATE testing
SET colb = CASE cola WHEN 1 THEN 'a'
WHEN 2 THEN 'b'
END
WHERE cola IN (1,2);
SELECT * FROM testing;
so you will go from:
cola colb
------------
1 b
2 a
to:
cola colb
------------
1 a
2 b
I also think that #2 is the best bet, though I would be sure to wrap it in a transaction in case something goes wrong mid-update.
An alternative (since you asked) to updating the unique-index values with different values would be to update all of the other values in the two rows to those of the other row. Doing this means you can leave the unique-index values alone, and you still end up with the data you want. Be careful, though: if some other table references this table in a foreign-key relationship, make sure all of the relationships in the DB remain intact.
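Spelled out as a sketch (hypothetical table t with a unique column uniq_col and one other column other_col; 'x' and 'y' are the two unique values in question):
-- leave uniq_col in place and swap everything else between the two rows
UPDATE t AS a
SET    other_col = b.other_col
FROM   t AS b
WHERE  a.uniq_col IN ('x', 'y')
AND    b.uniq_col IN ('x', 'y')
AND    a.uniq_col <> b.uniq_col;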
I have the same problem. Here's my proposed approach in PostgreSQL. In my case, my unique index is a sequence value, defining an explicit user-order on my rows. The user will shuffle rows around in a web-app, then submit the changes.
I'm planning to add a "before" trigger. In that trigger, whenever my unique index value is updated, I will look to see if any other row already holds my new value. If so, I will give them my old value, and effectively steal the value off them.
I'm hoping that PostgreSQL will allow me to do this shuffle in the before trigger.
I'll post back and let you know my mileage.
In SQL Server, the MERGE statement can update rows that would normally break a UNIQUE KEY/INDEX. (Just tested this because I was curious.)
However, you'd have to use a temp table/variable to supply MERGE w/ the necessary rows.
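A sketch of that approach (T-SQL; reusing the testing table from the earlier CASE-based answer):
-- a table variable supplies the pre-swapped rows to MERGE
DECLARE @swap TABLE (cola int, colb char(1));
INSERT INTO @swap VALUES (1, 'a'), (2, 'b');

MERGE testing AS t
USING @swap AS s
   ON t.cola = s.cola
WHEN MATCHED THEN
   UPDATE SET t.colb = s.colb;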
For Oracle there is an option, DEFERRED, but you have to add it to your constraint.
SET CONSTRAINT emp_no_fk_par DEFERRED;
To defer ALL constraints that are deferrable during the entire session, you can use the ALTER SESSION SET constraints=DEFERRED statement.
I usually think of a value that absolutely no row in my table could have in that column. Usually - for unique column values - it's really easy. For example, for values of the column 'position' (information about the order of several elements) it's 0.
Then you can copy value A to a variable, update row A with value B, and then set value B from your variable. Two queries; I know no better solution though.
Oracle has deferred integrity checking which solves exactly this, but it is not available in either SQL Server or MySQL.
1) Switch the ids for the names. For the sample input:
id | student
---+---------
 1 | Abbot
 2 | Doris
 3 | Emerson
 4 | Green
 5 | Jeames
the output after swapping each adjacent pair of ids is:
id | student
---+---------
 1 | Doris
 2 | Abbot
 3 | Green
 4 | Emerson
 5 | Jeames
A commenter asked: "in case of n number of rows, how will you manage?"