Unique constraint on inserting new row - sql

I wrote a SQL statement for PostgreSQL 12. First, I created a unique constraint:
CONSTRAINT post_comment_response_approval__tm_response__uidx UNIQUE (post_comment_response_id, team_member_id)
Then I use it in a SQL query:
INSERT INTO post_comment_response_approval (post_comment_response_id, team_member_id, approved, note)
VALUES (:postCommentResponseId, :workspaceMemberId, :approved, :note)
ON CONFLICT ON CONSTRAINT post_comment_response_approval__tm_response__uidx DO
UPDATE SET approved = :approved, note = :note
At first, I wanted it to reuse the same row whenever some action is made, but now I just want to make sure the API shows all of them if multiple actions have been submitted by the same member.
An example is that someone might suggest a change, then that change is made, then that person who suggested it later approves it. That would generate multiple post_comment_response_approval rows for that approver.
Is there a way to make this happen without removing the unique constraint, or should it be deleted? I am new to PostgreSQL.

I didn't understand your question in detail, but I think I understood what you need: you can use a PostgreSQL partial index.
An example:
CREATE TABLE table6 (
    id int4 NOT NULL,
    id2 int4 NOT NULL,
    approve bool NULL
);
-- create a partial unique index
CREATE UNIQUE INDEX table6_id_idx ON table6 (id, id2) WHERE approve IS TRUE;
INSERT INTO table6 (id, id2, approve) VALUES (1, 1, false);
-- insert succeeds
INSERT INTO table6 (id, id2, approve) VALUES (1, 1, false);
-- insert succeeds
INSERT INTO table6 (id, id2, approve) VALUES (1, 1, false);
-- insert succeeds
INSERT INTO table6 (id, id2, approve) VALUES (1, 1, true);
-- insert succeeds
INSERT INTO table6 (id, id2, approve) VALUES (1, 1, true);
-- ERROR: duplicate key value violates unique constraint "table6_id_idx"
This way, uniqueness is enforced only for rows matching the condition.
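The sequence above can be reproduced end to end. SQLite supports the same partial unique indexes, so here is a runnable sketch of the idea using the example's own table and index names (with `approve` stored as 0/1, since SQLite has no native boolean type):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Same shape as the example table; approve is 0/1 in SQLite.
cur.execute("CREATE TABLE table6 (id INTEGER NOT NULL, id2 INTEGER NOT NULL, approve INTEGER)")
# Partial unique index: uniqueness applies only where approve = 1.
cur.execute("CREATE UNIQUE INDEX table6_id_idx ON table6 (id, id2) WHERE approve = 1")

cur.execute("INSERT INTO table6 VALUES (1, 1, 0)")  # ok
cur.execute("INSERT INTO table6 VALUES (1, 1, 0)")  # ok: predicate not met
cur.execute("INSERT INTO table6 VALUES (1, 1, 1)")  # ok: first approved row
try:
    cur.execute("INSERT INTO table6 VALUES (1, 1, 1)")  # duplicate approved row
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The duplicate unapproved rows all go in; only the second approved row is rejected.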


Unique constraint on a condition in psql

I need to implement a unique constraint on a psql table.
I have the columns:
1) date
2) employee
3) client_id
4) start_time
I am trying to add two constraints:
1) Check uniqueness over date, employee, start_time, client_id; this simply works with a unique constraint.
2) The second constraint is for the case where an entry already exists with the same date, employee, start_time and client_id is NULL.
-> So if someone tries to create the same entry, we need a check like: does any entry already exist with the same date, employee_id, start_time AND client_id IS NULL?
In simple words:
1) If all 4 fields match an existing row > display a warning that the record exists.
2) If a record with the 3 fields and client_id = NULL exists > display a warning that the record exists and assign the client_id.
If anybody has a little hint, it would be helpful.
I think you need partial indexes. One will cover the case when client_id is provided, and the other will deal with a NULL client_id.
create table uni(val1 int, val2 text, val3 date, client_id int);
create unique index record_exists on uni(val1, val2, val3, client_id)
where client_id is not null;
create unique index record_exists_assign_client_id on uni(val1, val2, val3)
where client_id is null;
insert into uni values (1, 'test', current_date, 43), (2, 'test2', current_date, null);
--OK
insert into uni values (1, 'test', current_date, 43);
--duplicate key value violates unique constraint "record_exists"
insert into uni values (1, 'test', current_date, null);
--OK
insert into uni values (2, 'test2', current_date, null);
--duplicate key value violates unique constraint "record_exists_assign_client_id"

Simulating UPSERT in presence of UNIQUE constraints

Simulating UPSERT was already discussed before. In my case, though, I have a PRIMARY KEY and an additional UNIQUE constraint, and I want upsert semantics with respect to the primary key: replacing the existing row if it exists, while still checking the unique constraint.
Here's an attempt using insert-or-replace:
drop table if exists test;
create table test (id INTEGER, name TEXT, s INTEGER,
PRIMARY KEY (id, s),
UNIQUE (name, s));
insert or replace into test values (1, 'a', 0);
insert or replace into test values (1, 'a', 0);
insert or replace into test values (2, 'b', 0);
insert or replace into test values (2, 'a', 0);
The last statement replaces both rows. This is documented behavior of INSERT OR REPLACE, but not what I want.
Here is an attempt with "on conflict replace":
drop table if exists test;
create table test (id INTEGER, name TEXT, s INTEGER,
PRIMARY KEY (id, s) on conflict replace,
UNIQUE (name, s));
insert into test values (1, 'a', 0);
insert into test values (1, 'a', 0);
I get "UNIQUE constraint failed" right away. The problem disappears if I don't share a column between the primary key and the unique constraint:
drop table if exists test;
create table test (id INTEGER, name TEXT,
PRIMARY KEY (id) on conflict replace,
UNIQUE (name));
insert into test values (1, 'a');
insert into test values (1, 'a');
insert into test values (2, 'b');
insert into test values (2, 'a');
Here, I get constraint violation on the very last statement, which is precisely right. Sadly, I do need to share a column between constraints.
Is this something I don't understand about SQL, or an SQLite issue? And how do I get the desired effect, other than first trying the insert and then doing an update on failure?
Can you try applying the ON CONFLICT REPLACE clause to the UNIQUE constraint as well?
create table test (id INTEGER, name TEXT,
PRIMARY KEY (id) on conflict replace,
UNIQUE (name) on conflict replace);
SQLite is an embedded database without client/server communication overhead, so it is not necessary to try to do this in a single statement.
To simulate UPSERT, just execute the UPDATE/INSERT statements separately:
c.execute("UPDATE test SET s = ? WHERE id = ? AND name = ?", [0, 1, "a"])
if c.rowcount == 0:
    c.execute("INSERT INTO test(s, id, name) VALUES (?, ?, ?)", [0, 1, "a"])
Since SQLite 3.24.0, you can just use UPSERT.
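For reference, a sketch of that native UPSERT against the question's own schema. The conflict target is the primary key, so a clash on the separate UNIQUE constraint still errors out, which is exactly the semantic asked for (connection and cursor names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE test (id INTEGER, name TEXT, s INTEGER,
                                  PRIMARY KEY (id, s),
                                  UNIQUE (name, s))""")
cur.execute("INSERT INTO test VALUES (1, 'a', 0)")
cur.execute("INSERT INTO test VALUES (2, 'b', 0)")

# Upsert keyed on the primary key: replaces the name of row (1, 0).
cur.execute("""INSERT INTO test VALUES (1, 'c', 0)
               ON CONFLICT (id, s) DO UPDATE SET name = excluded.name""")

# This would rename row (1, 0) to 'b', colliding with (2, 'b', 0) on
# UNIQUE (name, s) -- the statement fails and the rows stay intact.
try:
    cur.execute("""INSERT INTO test VALUES (1, 'b', 0)
                   ON CONFLICT (id, s) DO UPDATE SET name = excluded.name""")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

Unlike INSERT OR REPLACE, the upsert never deletes the other conflicting row; the unique constraint is enforced against the attempted update.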

Can a postgres check *conditionally* restrict duplicates?

I have to implement a rule that only one 'group' with a given ID can have status 'In Progress' or 'Problem' at one time. Is it possible to represent this in a Postgres check, or would I have to resort to logic in the application?
For example:
INSERT INTO "group" (group_id, status) VALUES (1, 'In Progress'); -- okay
INSERT INTO "group" (group_id, status) VALUES (2, 'Problem'); -- okay
INSERT INTO "group" (group_id, status) VALUES (3, 'In Progress'); -- okay
INSERT INTO "group" (group_id, status) VALUES (4, 'Problem'); -- okay
INSERT INTO "group" (group_id, status) VALUES (1, 'Something else'); -- okay
INSERT INTO "group" (group_id, status) VALUES (2, 'Foo bar'); -- okay
INSERT INTO "group" (group_id, status) VALUES (1, 'In Progress'); -- should fail
INSERT INTO "group" (group_id, status) VALUES (1, 'Problem'); -- should fail
INSERT INTO "group" (group_id, status) VALUES (2, 'In Progress'); -- should fail
INSERT INTO "group" (group_id, status) VALUES (2, 'Problem'); -- should fail
I think you need a partial unique index.
CREATE UNIQUE INDEX group_status_unq_idx
ON "group"(group_id, status) WHERE (status IN ('In Progress', 'Problem'));
However, it isn't clear to me from your description why the second expected-failure would fail. Do you want to allow only one of In Progress or Problem for any given group_id? If so, you could write something like:
CREATE UNIQUE INDEX group_status_unq_idx
ON "group"(group_id) WHERE (status IN ('In Progress', 'Problem'));
... omitting the status from the partial unique index and using it only in the predicate.
Note that this cannot be expressed as a UNIQUE constraint, since unique constraints do not take a predicate. A PostgreSQL unique constraint is implemented using a (non-partial) UNIQUE index, but it also creates metadata entries that the mere index creation does not. A partial unique index works like a unique constraint with a predicate, but it won't be discoverable via metadata like information_schema's constraint info.
If I understand correctly:
create unique index group_restriction_index on "group"
(group_id) where status in ('In Progress', 'Problem')
You can make a unique constraint on multiple columns:
CREATE TABLE "group" (
group_id integer,
status char(100),
UNIQUE (group_id, status)
);
(From the Postgresql documentation).
This would prevent duplicates for any status, though, not just 'In Progress' and 'Problem'. I'm not sure if that is what you want.
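The partial-index approach can be checked against the question's pass/fail table. SQLite supports the same WHERE clause on unique indexes, so this sketch (table quoted because `group` is a reserved word) mirrors the expected behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute('CREATE TABLE "group" (group_id INTEGER, status TEXT)')
# At most one In Progress/Problem row per group_id; other statuses free.
cur.execute('''CREATE UNIQUE INDEX group_status_unq_idx
               ON "group"(group_id)
               WHERE status IN ('In Progress', 'Problem')''')

ok = [(1, 'In Progress'), (2, 'Problem'), (3, 'In Progress'),
      (4, 'Problem'), (1, 'Something else'), (2, 'Foo bar')]
for row in ok:
    cur.execute('INSERT INTO "group" VALUES (?, ?)', row)

failures = 0
for row in [(1, 'In Progress'), (1, 'Problem'),
            (2, 'In Progress'), (2, 'Problem')]:
    try:
        cur.execute('INSERT INTO "group" VALUES (?, ?)', row)
    except sqlite3.IntegrityError:
        failures += 1
print(failures)  # all four duplicates are rejected
```

All six "okay" inserts succeed, and all four "should fail" inserts are rejected, matching the question's requirements.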

SQL can I have a "conditionally unique" constraint on a table?

I've had this come up a couple of times in my career, and none of my local peers seem to be able to answer it. Say I have a table with a "Description" field which is a candidate key, except that sometimes a user will stop halfway through the process. So for maybe 25% of the records this value is null, but wherever it is not NULL, it must be unique.
Another example might be a table which must maintain multiple "versions" of a record, and a bit value indicates which one is the "active" one. So the "candidate key" is always populated, but there may be three versions that are identical (with 0 in the active bit) and only one that is active (1 in the active bit).
I have alternate methods to solve these problems (in the first case, enforce the rule code, either in the stored procedure or business layer, and in the second, populate an archive table with a trigger and UNION the tables when I need a history). I don't want alternatives (unless there are demonstrably better solutions), I'm just wondering if any flavor of SQL can express "conditional uniqueness" in this way. I'm using MS SQL, so if there's a way to do it in that, great. I'm mostly just academically interested in the problem.
If you are using SQL Server 2008, a filtered index may be your solution:
http://msdn.microsoft.com/en-us/library/ms188783.aspx
This is how I enforce a Unique Index with multiple NULL values
CREATE UNIQUE INDEX [IDX_Blah] ON [tblBlah] ([MyCol]) WHERE [MyCol] IS NOT NULL
In the case of descriptions which are not yet completed, I wouldn't have those in the same table as the finalized descriptions. The final table would then have a unique index or primary key on the description.
In the case of the active/inactive, again I might have separate tables as you did with an "archive" or "history" table, but another possible way to do it in MS SQL Server at least is through the use of an indexed view:
CREATE TABLE Test_Conditionally_Unique
(
my_id INT NOT NULL,
active BIT NOT NULL DEFAULT 0
)
GO
CREATE VIEW dbo.Test_Conditionally_Unique_View
WITH SCHEMABINDING
AS
SELECT
my_id
FROM
dbo.Test_Conditionally_Unique
WHERE
active = 1
GO
CREATE UNIQUE CLUSTERED INDEX IDX1 ON Test_Conditionally_Unique_View (my_id)
GO
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (1, 1)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (2, 0)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (2, 1)
INSERT INTO dbo.Test_Conditionally_Unique (my_id, active)
VALUES (2, 1) -- This insert will fail
You could use this same method for the NULL/Valued descriptions as well.
Thanks for the comments, the initial version of this answer was wrong.
Here's a trick using a computed column that effectively allows a nullable unique constraint in SQL Server:
create table NullAndUnique
(
    id int identity,
    name varchar(50),
    uniqueName as case
        when name is null then cast(id as varchar(51))
        else name + '_'
    end,
    unique(uniqueName)
)
insert into NullAndUnique default values
insert into NullAndUnique default values -- Works
insert into NullAndUnique default values -- not accidentally :)
insert into NullAndUnique (name) values ('Joel')
insert into NullAndUnique (name) values ('Joel') -- Boom!
It basically uses the id when the name is null. The + '_' is there to avoid cases where the name might be numeric, like '1', which could otherwise collide with an id.
I'm not entirely aware of your intended use or your tables, but you could try using a one to one relationship. Split out this "sometimes" unique column into a new table, create the UNIQUE index on that column in the new table and FK back to the original table using the original tables PK. Only have a row in this new table when the "unique" data is supposed to exist.
OLD tables:
TableA
ID pk
Col1 sometimes unique
Col...
NEW tables:
TableA
ID
Col...
TableB
ID PK, FK to TableA.ID
Col1 unique index
Oracle does. A fully null key is not indexed in a B-tree index in Oracle, and Oracle uses B-tree indexes to enforce unique constraints.
Assuming one wished to version ID_COLUMN based on the ACTIVE_FLAG being set to 1:
CREATE UNIQUE INDEX idx_versioning_id ON mytable
(CASE active_flag WHEN 0 THEN NULL ELSE active_flag END,
CASE active_flag WHEN 0 THEN NULL ELSE id_column END);

Conditional composite key in MySQL?

So I have this table with a composite key, basically 'userID'-'data' must be unique (see my other question SQL table - semi-unique row?)
However, I was wondering if it was possible to make this only come into effect when userID is not zero? By that I mean, 'userID'-'data' must be unique for non-zero userIDs?
Or am I barking up the wrong tree?
Thanks
Mala
SQL constraints apply to every row in the table. You can't make them conditional based on certain data values.
However, if you could use NULL instead of zero, you can get around the unique constraint. A unique constraint allows multiple entries that have NULL. The reason is that uniqueness means no two equal values can exist. Equality means value1 = value2 must be true. But in SQL, NULL = NULL is unknown, not true.
CREATE TABLE MyTable (id SERIAL PRIMARY KEY, userid INT, data VARCHAR(64), UNIQUE (userid, data));
INSERT INTO MyTable (userid, data) VALUES ( 1, 'foo');
INSERT INTO MyTable (userid, data) VALUES ( 1, 'bar');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'baz');
So far so good, now you might think the following statements would violate the unique constraint, but they don't:
INSERT INTO MyTable (userid, data) VALUES ( 1, 'baz');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'foo');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'baz');
INSERT INTO MyTable (userid, data) VALUES (NULL, 'baz');
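The NULL behavior described above holds in SQLite as well, so the whole sequence can be verified with a short sketch (the UNIQUE (userid, data) constraint is stated explicitly, since the example relies on it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""CREATE TABLE MyTable (id INTEGER PRIMARY KEY,
                                     userid INTEGER, data TEXT,
                                     UNIQUE (userid, data))""")
rows = [(1, 'foo'), (1, 'bar'), (None, 'baz'),
        (1, 'baz'), (None, 'foo'), (None, 'baz'), (None, 'baz')]
for row in rows:
    # None maps to SQL NULL; since NULL = NULL is not true, no pair of
    # NULL userids ever counts as a duplicate under the unique constraint.
    cur.execute("INSERT INTO MyTable (userid, data) VALUES (?, ?)", row)
print(cur.execute("SELECT COUNT(*) FROM MyTable").fetchone()[0])  # 7
```

Every insert succeeds, including the repeated (NULL, 'baz') rows, because the unique constraint never treats two NULLs as equal.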