Oracle Unique Constraint - Trigger to check value of property in new relation - sql

Hi, I'm having trouble getting my SQL syntax correct. I want to create a unique constraint that looks at the newly added foreign key and at some properties of the newly related entity to decide if the relationship is allowed.
CREATE or replace TRIGGER "New_Trigger"
AFTER INSERT OR UPDATE ON "Table_1"
FOR EACH ROW
BEGIN
Select "Table_2"."number"
(CASE "Table_2"."number" > 0
THEN RAISE_APPLICATION_ERROR(-20000, 'this is not allowed');
END)
from "Table_1"
WHERE "Table_2"."ID" = :new.FK_Table_2_ID
END;
Edit: APC's answer is wonderfully comprehensive, but it leads me to think I'm doing this the wrong way.
The situation is that I have a table of people with different privilege levels, and I want to check these privilege levels. E.g. a user, 'Bob', has low-level privileges and tries to become head of department, which requires high privileges, so the system prevents this from happening.
There is a follow-up question which poses a related scenario but with a different data model. Find it here.

So the rule you want to enforce is that TABLE_1 can only reference TABLE_2 if some column in TABLE_2 is zero or less. Hmmm.... Let's sort out the trigger logic and then we'll discuss the rule.
The trigger should look like this:
CREATE or replace TRIGGER "New_Trigger"
AFTER INSERT OR UPDATE ON "Table_1"
FOR EACH ROW
declare
n "Table_2"."number"%type;
BEGIN
Select "Table_2"."number"
into n
from "Table_2"
WHERE "Table_2"."ID" = :new.FK_Table_2_ID;
if n > 0
THEN RAISE_APPLICATION_ERROR(-20000, 'this is not allowed');
end if;
END;
Note that your error message should include some helpful information such as the value of the TABLE_1 primary key, for when you are inserting or updating multiple rows on the table.
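To play with this lookup-trigger pattern without an Oracle instance, here is a minimal sketch in SQLite (via Python's sqlite3). SQLite has no RAISE_APPLICATION_ERROR, so RAISE(ABORT, ...) stands in for it, and the table and column names simply mirror the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE table_2 (id INTEGER PRIMARY KEY, number INTEGER NOT NULL);
CREATE TABLE table_1 (id INTEGER PRIMARY KEY, fk_table_2_id INTEGER NOT NULL);

-- Reject a child row when the referenced parent's number is greater than zero
CREATE TRIGGER new_trigger BEFORE INSERT ON table_1
BEGIN
    SELECT RAISE(ABORT, 'this is not allowed')
    WHERE (SELECT number FROM table_2 WHERE id = NEW.fk_table_2_id) > 0;
END;
""")

conn.execute("INSERT INTO table_2 VALUES (1, 5)")  # number > 0: children disallowed

blocked = False
try:
    conn.execute("INSERT INTO table_1 VALUES (10, 1)")
except sqlite3.DatabaseError:
    blocked = True  # the trigger aborted the insert

conn.execute("INSERT INTO table_2 VALUES (2, 0)")
conn.execute("INSERT INTO table_1 VALUES (11, 2)")  # allowed: number is 0
```

Note the same caveat applies as in Oracle: this checks the rule only on writes to table_1, not on later updates to table_2.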
What you are trying to do here is to enforce a type of constraint known as an ASSERTION. Assertions are specified in the ANSI standard but Oracle has not implemented them. Nor has any other RDBMS, come to that.
Assertions are problematic because they are symmetrical. That is, the rule also needs to be enforced on TABLE_2. At the moment you check the rule when a record is created in TABLE_1. Suppose at some later time a user updates TABLE_2.NUMBER so it is greater than zero: your rule is now broken, but you won't know that it is broken until somebody issues a completely unrelated UPDATE on TABLE_1, which will then fail. Yuck.
So, what to do?
If the rule is actually
TABLE_1 can only reference TABLE_2 if
TABLE_2.NUMBER is zero
then you can enforce it without triggers.
Add a UNIQUE constraint on TABLE_2 for (ID, NUMBER); you need an additional constraint because ID remains the primary key for TABLE_2.
Add a dummy column on TABLE_1 called TABLE_2_NUMBER. Default it to zero and have a check constraint to ensure it is always zero. (If you are on 11g you should consider using a virtual column for this.)
Change the foreign key on TABLE_1 so (FK_Table_2_ID, TABLE_2_NUMBER) references the unique constraint rather than TABLE_2's primary key.
Drop the "New_Trigger" trigger; you don't need it anymore as the foreign key will prevent anybody updating TABLE_2.NUMBER to a value other than zero.
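The four steps above can be sketched end-to-end in SQLite (via Python's sqlite3; SQLite stands in for Oracle here, and the names mirror the example). The composite foreign key both blocks children of rows whose NUMBER is non-zero and stops anyone updating a referenced NUMBER away from zero:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this per connection
conn.executescript("""
CREATE TABLE table_2 (
    id     INTEGER PRIMARY KEY,
    number INTEGER NOT NULL,
    UNIQUE (id, number)            -- extra target for the composite FK
);
CREATE TABLE table_1 (
    id             INTEGER PRIMARY KEY,
    fk_table_2_id  INTEGER NOT NULL,
    -- dummy column, constrained to always be zero
    table_2_number INTEGER NOT NULL DEFAULT 0 CHECK (table_2_number = 0),
    FOREIGN KEY (fk_table_2_id, table_2_number)
        REFERENCES table_2 (id, number)
);
""")

conn.execute("INSERT INTO table_2 (id, number) VALUES (1, 0)")
conn.execute("INSERT INTO table_1 (id, fk_table_2_id) VALUES (10, 1)")  # allowed

update_blocked = False
try:
    # would orphan table_1 row 10, so the FK rejects it
    conn.execute("UPDATE table_2 SET number = 5 WHERE id = 1")
except sqlite3.IntegrityError:
    update_blocked = True

conn.execute("INSERT INTO table_2 (id, number) VALUES (2, 7)")
insert_blocked = False
try:
    # child carries table_2_number = 0, and there is no parent (2, 0)
    conn.execute("INSERT INTO table_1 (id, fk_table_2_id) VALUES (11, 2)")
except sqlite3.IntegrityError:
    insert_blocked = True
```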
But if the rule is really as I formulated it at the top i.e.
TABLE_1 can only reference TABLE_2 if
TABLE_2.NUMBER is not greater than zero (i.e. negative values are okay)
then you need another trigger, this time on TABLE_2, to enforce it the other side of the rule.
CREATE or replace TRIGGER "Assertion_Trigger"
BEFORE UPDATE of "number" ON "Table_2"
FOR EACH ROW
declare
x pls_integer;
BEGIN
if :new."number" > 0
then
begin
Select 1
into x
from "Table_1"
WHERE "Table_1"."FK_Table_2_ID" = :new.ID
and rownum = 1;
RAISE_APPLICATION_ERROR(-20001, :new.ID
||' has dependent records in Table_1');
exception
when no_data_found then
null; -- this is what we want
end;
end if;
END;
This trigger will not allow you to update TABLE_2.NUMBER to a value greater than zero if it is referenced by records in TABLE_1. It only fires when the UPDATE statement touches TABLE_2.NUMBER, to minimise the performance impact of executing the lookup.

Don't use a trigger to create a unique constraint or a foreign key constraint. Oracle has declarative support for unique and foreign keys, e.g.:
Add a unique constraint on a column:
ALTER TABLE "Table_1" ADD (
CONSTRAINT table_1_uk UNIQUE (column_name)
);
Add a foreign key relationship:
ALTER TABLE "ChildTable" ADD (
CONSTRAINT my_fk FOREIGN KEY (parent_id)
REFERENCES "ParentTable" (id)
);
I'm not clear on exactly what you're trying to achieve with your trigger - it's a bit of a mess of SQL and PL/SQL munged together which will not work, and it refers to a column on "Table_2", a table which is never actually queried.
A good rule of thumb is, if your trigger is querying the same table that the trigger is on, it's probably wrong.
I'm not sure, but are you after some kind of conditional foreign key relationship? i.e. "only allow child rows where the parent satisfies condition x"? If so, the problem is in the data model and should be fixed there. If you provide more explanation of what you're trying to achieve we should be able to help you.

Related

How do you add complex constraints on a table? [closed]

In SQL Server (or any other SQL dialect, I guess), how do you make sure that a table always satisfies some complex constraint?
For instance, imagine a table with columns A, B and C.
The constraint I want to enforce is that all rows that contain the same value for column A (e.g. 'a1') also have the same value for column B (e.g. 'b'). For instance, this isn't allowed:
A    B    C
a1   b    c1
a1   c    c2
a2   d    c3
But this is allowed:
A    B    C
a1   b    c1
a1   b    c2
a2   d    c3
This is one example of a constraint, but they could be more complex (i.e. depend on more columns, etc.). Here I think this particular constraint could be expressed in terms of a primary key etc., but let's assume it can't.
What is the way to enforce this?
What I can think of is that each time I want to do an insert, I instead go through a stored procedure.
The stored procedure would run an atomic transaction and check whether the table would still satisfy the constraint if updated, "rejecting" the update if the constraint would be violated, but this seems quite complex.
Rough definitions, for this answer:
Simple constraint. A constraint that can be, without schema changes beyond the constraint definition, enforced by the straight forward use of one of SQL Server constraints: primary key, unique, foreign key, not null and check against a simple scalar expression. (I would include a unique filtered index as a simple constraint.)
Complex constraint. A constraint that can not be so enforced.
Just as SQL Server provides different constraints to handle different situations, there are different techniques for enforcing more complex constraints. Which technique is appropriate depends on what one is trying to accomplish and the context one is in.
Broadly, there are two approaches to enforcing complex constraints: either add code that prevents bad data, or change the schema in such a way that SQL Server can enforce the constraint with a simple constraint.
Add code to enforce the constraint.
A common refrain is to use a trigger to enforce a constraint. So one writes an after trigger that checks whether the constraint has been violated by the DML that fired the trigger. If it has been violated, the trigger throws an error, rolling back the transaction. Another approach is to take insert, update, and delete rights away from all applications and force them through stored procedures that will error on attempts to violate the constraint. Using a UDF in a check constraint is new to me. The mechanism is similar to the after trigger: a DML statement makes a change to the data; after that change, but before the DML statement is done, the code in the UDF is run and the result checked to see whether the constraint is violated or not.
The big challenge with adding code that checks the constraint is correctly enforcing the constraint in the face of concurrency. Because transactions are isolated, the code verifying that the constraint is met may not see a conflicting change made by another session. See an example "Concurrency can cause trouble" at the end.
Now, some business rules are too complex to enforce with changes to the schema, and code is required. I generally don't consider such business rules to be constraints that the database must enforce, and I place the code to enforce them in either stored procedures or application code, with the other business rules. (Saying that these business rules are not database constraints does not make getting correctness in the face of concurrency any easier.)
Change the schema so SQL Server can enforce the constraint.
A strength reduction from a complex constraint to a simple constraint. There is no one technique in this approach. Sometimes the schema change that will allow SQL Server to enforce a constraint is easy to spot. And sometimes one may need to play with SQL Server features with outside-the-box thinking. Some schema changes that may be helpful:
Increase the normalization. Many reasonable constraints cannot be expressed against denormalized data.
Adding a persisted computed column to a table and using that column in a constraint.
Indexed views can be amazing for constraint enforcement. (The restrictions placed on what views can be indexed is super frustrating.)
Example using another table
(This may count as normalization.) Let's consider a table like the one in your question, but with one additional column: a primary key that allows update and delete of individual rows.
CREATE TABLE dbo.T (PK INT PRIMARY KEY IDENTITY(1, 1)
, A CHAR(1) NOT NULL
, B INT NOT NULL
/* other columns as needed */);
And the rule is that the following query should never return a row:
SELECT 'Uh oh!' AS "Uh oh!"
FROM dbo.T
GROUP BY A
HAVING COUNT(DISTINCT B) > 1
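As a quick sanity check, the violation query is easy to run against sample data; here is a SQLite sketch (via Python's sqlite3) using rows shaped like the "not allowed" example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE T (PK INTEGER PRIMARY KEY, A TEXT NOT NULL, B INTEGER NOT NULL)"
)
# 'a1' maps to two different B values, which violates the rule
conn.executemany("INSERT INTO T (A, B) VALUES (?, ?)",
                 [("a1", 100), ("a1", 200), ("a2", 300)])

# Groups of A that map to more than one distinct B are the violations
violations = conn.execute(
    "SELECT A FROM T GROUP BY A HAVING COUNT(DISTINCT B) > 1"
).fetchall()
# violations == [('a1',)]
```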
To enforce this one can add the following table and constraint:
CREATE TABLE dbo.TConstraint (A CHAR(1) NOT NULL PRIMARY KEY /* Only one B is allowed for any A */
, B INT NOT NULL
, UNIQUE (A, B) /* provide a target for a foreign key from dbo.T */);
ALTER TABLE dbo.T ADD FOREIGN KEY (A, B)
REFERENCES dbo.TConstraint (A, B);
Now the application logic or stored procedures that write to dbo.T will have to change to write to dbo.TConstraint first. Updates to dbo.T (B) are going to be messy because one will need to change the values in dbo.TConstraint before dbo.T, but that can't be done while the old values are still in dbo.T. So an update becomes:
Delete row from dbo.T
Update dbo.TConstraint
Insert "changed" row into dbo.T.
That update logic is incompatible with an IDENTITY column and incompatible with any foreign key referencing dbo.T. So the contexts where this would work are limited.
Note also that this solution includes adding additional code for insert, update, and possibly delete. That extra code is not enforcing the constraint, but manipulating the data into a form such that SQL Server can enforce the constraint.
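A runnable sketch of the two-table approach, in SQLite via Python's sqlite3 (SQLite stands in for SQL Server here; the behaviour of the composite foreign key is analogous):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE TConstraint (
    A TEXT PRIMARY KEY,   -- at most one B per A
    B INTEGER NOT NULL,
    UNIQUE (A, B)         -- target for the foreign key from T
);
CREATE TABLE T (
    PK INTEGER PRIMARY KEY,
    A  TEXT NOT NULL,
    B  INTEGER NOT NULL,
    FOREIGN KEY (A, B) REFERENCES TConstraint (A, B)
);
""")

conn.execute("INSERT INTO TConstraint VALUES ('a1', 100)")
conn.execute("INSERT INTO T (A, B) VALUES ('a1', 100)")
conn.execute("INSERT INTO T (A, B) VALUES ('a1', 100)")  # repeating (A, B) is fine

rejected = False
try:
    # no ('a1', 200) row in TConstraint, so 'a1' cannot take a second B
    conn.execute("INSERT INTO T (A, B) VALUES ('a1', 200)")
except sqlite3.IntegrityError:
    rejected = True
```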
Example using indexed views
Indexed views come with caveats. See the documentation: https://learn.microsoft.com/en-us/sql/relational-databases/views/create-indexed-views?view=sql-server-ver15
Instead of the new table and constraint:
/* SET OPTIONS required by indexed views*/
SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS
, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
GO
/* view shows all the distinct A, B in dbo.T */
CREATE VIEW dbo.TConstraint WITH SCHEMABINDING AS
SELECT A
, B
, COUNT_BIG (*) AS MustHaveForGroupingInAnIndexedView
FROM dbo.T
GROUP BY A, B;
GO
CREATE UNIQUE CLUSTERED INDEX A_B ON dbo.TConstraint (A, B);
CREATE UNIQUE NONCLUSTERED INDEX A ON dbo.TConstraint (A);
INSERT INTO dbo.T (A, B)
VALUES ('a', 100), ('a', 200)
Msg 2601, Level 14, State 1, Line 27
Cannot insert duplicate key row in object 'dbo.TConstraint' with unique index 'A'. The duplicate key value is (a).
Pros:
1) No extra code to get the data into a form SQL Server can constrain.
2) Updates to dbo.T (B) can be done without the delete-from-dbo.T, update-dbo.TConstraint, insert-into-dbo.T dance and all the restrictions it creates.
Cons:
1) The view has to have schemabinding.
2) Deploying schema changes to dbo.T will involve dropping the indexes on the view, dropping the view, altering dbo.T, recreating the view, and recreating the indexes on the view. Some deployment tools may struggle with the required steps, and large tables with large indexed views will make deployments of changes to dbo.T take longer.
Concurrency can cause trouble.
@Dai's answer will work in at least some cases, but fails in the following case.
Setup:
Create dbo.TableX, dbo.GetCountInvalidGroupsInTableX(), and CK_Validity as detailed in Dai's answer.
Enable snapshot isolation for the database. ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON
Connect to the database with two separate sessions.
In session 1:
/* Read committed was the default in my connection */
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
INSERT INTO TableX VALUES(10, 100);
In session 2:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
INSERT INTO TableX VALUES(10, 200);
/* (1 row affected) This insert did not block
on the open transaction */
In session 1:
COMMIT;
SELECT * FROM TableX;
/*
A B
10 100
10 200
*/
To add code that enforces a constraint, one must know which isolation levels may be used and ensure that the DML will block when needed, so that the code checking that the constraint is still satisfied can see everything it needs to.
Summary
Just as SQL Server has different constraint types for different situations, there are different techniques for enforcing constraints when what SQL Server provides against the current data model is not enough. Prefer schema changes that allow SQL Server to enforce the constraints. SQL Server is far more likely to get concurrency correct than most people writing Transact-SQL.
SQL Server supports UDFs in CHECK constraints.
While CHECK constraints are ostensibly a row-level-only constraint, a CHECK constraint can still use a UDF which checks the entire table, and I believe that then suits your purposes.
Note that CHECK constraints are evaluated for every row in a table (though only on-demand) and SQL Server isn't smart enough to detect redundant constraint evaluations. So be careful - and be sure to monitor and profile your database's performance.
(The inner-query in the function below returns the A and COUNT(DISTINCT x.B) values so you can easily copy+paste the inner-query into a new SSMS tab to see the invalid data. Whereas the function's actual execution-plan will optimize-away those columns because they aren't used by the outer-query, so there's no harm in having them in the inner-query).
CREATE FUNCTION dbo.GetCountInvalidGroupsInTableX()
RETURNS bit
AS
BEGIN
DECLARE @countInvalidGroups int = (
SELECT
COUNT(*)
FROM
(
SELECT
x.A,
COUNT(DISTINCT x.B) AS CountDistinctXB
FROM
dbo.TableX AS x
GROUP BY
x.A
HAVING
COUNT(DISTINCT x.B) >= 2
) AS invalidGroups
);
RETURN @countInvalidGroups;
END
Used like so:
CREATE TABLE TableX (
A int NOT NULL,
B int NOT NULL,
CONSTRAINT PK_TableX PRIMARY KEY ( etc )
);
CREATE FUNCTION dbo.GetCountInvalidGroupsInTableX() RETURNS bit
AS
/* ... */
END;
ALTER TABLE TableX
ADD CONSTRAINT CK_Validity CHECK ( dbo.GetCountInvalidGroupsInTableX() = 0 );

Prohibit users from updating a column if another column is null?

I have mytable which has 3 integer fields: id, status, project_id.
I've told people that they should not progress status past 4 before assigning it a project_id value. Naturally people don't listen and then there are problems down the road.
Is there a way to return an error if someone tries to update from status 4 to 5 while project_id column is null? I still need people to be able to update status from 2 or 3 to status 4 regardless of it having a project_id.
You can use a CHECK constraint as suggested by @stickbit if you need very simple checks.
If you need more complicated logic, you can use TRIGGER functionality:
CREATE FUNCTION check_status()
RETURNS trigger AS
$mytrigger$
BEGIN
IF OLD.status = 4 AND NEW.status >= 5 AND NEW.project_id IS NULL THEN
RAISE EXCEPTION 'Project ID must be assigned before progressing to status 5';
END IF;
RETURN NEW;
END
$mytrigger$
LANGUAGE plpgsql;
CREATE TRIGGER project_id_check
BEFORE UPDATE ON "MyTable"
FOR EACH ROW EXECUTE PROCEDURE check_status();
How about a check constraint on the table:
CHECK (project_id IS NOT NULL OR status < 5)
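This CHECK is portable enough to demonstrate in SQLite (via Python's sqlite3); the table layout below is a guess at the question's mytable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE mytable (
    id         INTEGER PRIMARY KEY,
    status     INTEGER NOT NULL,
    project_id INTEGER,
    CHECK (project_id IS NOT NULL OR status < 5)
)""")

conn.execute("INSERT INTO mytable (id, status) VALUES (1, 4)")  # status 4, no project: fine

blocked = False
try:
    conn.execute("UPDATE mytable SET status = 5 WHERE id = 1")  # still no project_id
except sqlite3.IntegrityError:
    blocked = True

# assigning a project_id at the same time satisfies the CHECK
conn.execute("UPDATE mytable SET project_id = 7, status = 5 WHERE id = 1")
```

Note the CHECK also blocks a direct INSERT of status 5 with a NULL project_id, not just updates.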
If you have data violating your desired rules, you can still use a CHECK constraint like demonstrated by sticky bit. Just make it NOT VALID:
ALTER TABLE mytable ADD CONSTRAINT project_id_required_for_status_5_or_higher
CHECK (project_id IS NOT NULL OR status < 5) NOT VALID;
Then the constraint is only applied to subsequent inserts and updates. Existing rows are ignored. (But any new update must fix violating values or it will fail.)
You should also have a FOREIGN KEY constraint enforcing referential integrity for project_id, else the constraint can easily be circumvented with dummy values.
Fine point: a CHECK constraint not only prohibits the updates you mentioned, but inserts with a violating state as well.
Once all rows are fixed to comply with the new rule, you can VALIDATE the constraint:
ALTER TABLE mytable VALIDATE CONSTRAINT project_id_required_for_status_5_or_higher;
More:
Trigger vs. check constraint

SQL constraint to check whether value doesn't exist in another table

In my PostgreSQL 9.4 database, I have a table fields with a column name with unique values.
I'm creating a new table fields_new with a similar structure (not important here) and a column name as well. I need a way to constrain the name values inserted into fields_new so that they are not present in fields.name.
For example, if fields.name contains the values 'color' and 'length', I need to prevent fields_new.name from containing 'color' or 'length' values. So, in other words I need to provide that the name columns in both tables do not have any duplicate values between them. And the constraint should go both ways.
Only enforce constraint for new entries in fields_new
CHECK constraints are supposed to be immutable, which generally rules out any kind of reference to other tables, which are not immutable by nature.
To allow some leeway (especially with temporal functions) STABLE functions are tolerated. Obviously, this cannot be completely reliable in a database with concurrent write access. If rows in the referenced table change, they may be in violation of the constraint.
Declare the invalid nature of your constraint by making it NOT VALID (Postgres 9.1+). This way Postgres also won't try to enforce it during a restore (which might be bound to fail). Details here:
Disable all constraints and table checks while restoring a dump
The constraint is only enforced for new rows.
CREATE OR REPLACE FUNCTION f_fields_name_free(_name text)
RETURNS bool AS
$func$
SELECT NOT EXISTS (SELECT 1 FROM fields WHERE name = $1);
$func$ LANGUAGE sql STABLE;
ALTER TABLE fields_new ADD CONSTRAINT fields_new_name_not_in_fields
CHECK (f_fields_name_free(name)) NOT VALID;
Plus, of course, a UNIQUE or PRIMARY KEY constraint on fields_new(name) as well as on fields(name).
Related:
CONSTRAINT to check values from a remotely related table (via join etc.)
Function to update a status flag for validity of other column?
Trigger vs. check constraint
Enforce both ways
You could go one step further and mirror the above CHECK constraint on the 2nd table. Still no guarantees against nasty race conditions when two transactions write to both tables at the same time.
Or you could maintain a "materialized view" manually with triggers: a union of both name columns. Add a UNIQUE constraint there. Not as rock solid as the same constraint on a single table: there might be race conditions for writes to both tables at the same time. But the worst that can happen is a deadlock forcing transactions to be rolled back. No permanent violation can creep in if all write operations are cascaded to the "materialized view".
Similar to the "dark side" in this related answer:
Can PostgreSQL have a uniqueness constraint on array elements?
Just that you need triggers for INSERT / UPDATE / DELETE on both tables.
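A minimal sketch of that trigger-maintained "materialized view" in SQLite (via Python's sqlite3; only the INSERT and DELETE triggers are shown, and the trigger/table names are invented for the sketch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE fields     (name TEXT PRIMARY KEY);
CREATE TABLE fields_new (name TEXT PRIMARY KEY);
-- manually maintained union of both name columns
CREATE TABLE all_names  (name TEXT PRIMARY KEY);

CREATE TRIGGER fields_ins AFTER INSERT ON fields
BEGIN INSERT INTO all_names VALUES (NEW.name); END;

CREATE TRIGGER fields_new_ins AFTER INSERT ON fields_new
BEGIN INSERT INTO all_names VALUES (NEW.name); END;

CREATE TRIGGER fields_del AFTER DELETE ON fields
BEGIN DELETE FROM all_names WHERE name = OLD.name; END;

CREATE TRIGGER fields_new_del AFTER DELETE ON fields_new
BEGIN DELETE FROM all_names WHERE name = OLD.name; END;
""")

conn.execute("INSERT INTO fields VALUES ('color')")

dup_blocked = False
try:
    # collides with 'color' in all_names; the trigger's insert fails,
    # which aborts the outer statement as well
    conn.execute("INSERT INTO fields_new VALUES ('color')")
except sqlite3.IntegrityError:
    dup_blocked = True

conn.execute("INSERT INTO fields_new VALUES ('width')")  # distinct name: fine
```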
I had a similar problem where I wanted to maintain a list of items per-company, along with a global list for all companies. If the company number is 0, it is to be treated as global and a new item cannot be inserted for ANY company using that name. The following script (based on the above solution) seems to work:
drop table if exists blech;
CREATE TABLE blech (
company int,
name_key text,
unique (company, name_key)
);
create or replace function f_foobar(new_company int, new_name_key text) returns bool as
$func$
select not exists (
select 1 from blech b
where $1 <> 0
and b.company = 0
and b.name_key = $2);
$func$ language sql stable;
alter table blech add constraint global_unique_name_key
check (f_foobar(company, name_key)) not valid;
insert into blech values(0,'GLOB1');
insert into blech values(0,'GLOB2');
-- should succeed:
insert into blech values(1,'LOCAL1');
insert into blech values(2,'LOCAL1');
-- should fail:
insert into blech values(1,'GLOB1');
-- should fail:
insert into blech values(0,'GLOB1');

trigger execution against condition satisfaction

I have created this trigger, which should give an error whenever the value of the new rctmemenrolno of table receipts1 does not match a memenrolno of table memmast, but it is giving the error in both conditions (matched or not matched).
Kindly help me.
CREATE OR REPLACE TRIGGER HDD_CABLE.trg_rctenrolno
before insert ON HDD_CABLE.RECEIPTS1 for each row
declare
v_enrolno varchar2(9);
cursor c1 is select memenrolno from memmast;
begin
open c1;
fetch c1 into v_enrolno;
LOOP
If :new.rctmemenrolno<>v_enrolno
then
raise_application_error(-20186,'PLEASE ENTER CORRECT ENROLLMENT NO');
close c1;
end if;
END LOOP;
end;
You are validating whether the entered RECEIPTS1.rctmemenrolno matches a memenrolno in MEMMAST, right? Well, a trigger is the wrong way to do this.
The loop completely won't scale (the more rows in MEMMAST, the longer it will take to insert a record into RECEIPTS1). But even a direct lookup to check for the existence of the specified key will still suck. Also, this approach fails to work safely in a multi-user environment, because Oracle uses the READ COMMITTED isolation level. In other words, while your transaction is making the check, some other session could be deleting the row you have just found. Which results in a corrupt database.
The one and only correct way to do this is with a foreign key constraint.
alter table receipts1
add constraint receipts1_memmast_fk foreign key (rctmemenrolno)
references memmast (memenrolno);
Of course, this presumes you have a primary key - or a unique constraint - on memmast to enforce the uniqueness of memenrolno. I fervently hope you are not trying to enforce its uniqueness through triggers.
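The effect of the foreign key is easy to demonstrate in SQLite (via Python's sqlite3; SQLite stands in for Oracle, and the receipts1 layout below is a simplified guess):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.executescript("""
CREATE TABLE memmast (memenrolno TEXT PRIMARY KEY);
CREATE TABLE receipts1 (
    rctid         INTEGER PRIMARY KEY,
    rctmemenrolno TEXT NOT NULL REFERENCES memmast (memenrolno)
);
""")

conn.execute("INSERT INTO memmast VALUES ('ENR001')")
conn.execute("INSERT INTO receipts1 (rctmemenrolno) VALUES ('ENR001')")  # known member

rejected = False
try:
    # no such enrolment number in memmast, so the FK rejects the receipt
    conn.execute("INSERT INTO receipts1 (rctmemenrolno) VALUES ('BOGUS')")
except sqlite3.IntegrityError:
    rejected = True
```

Unlike the trigger, the constraint also stays true over time: the database will refuse to delete a memmast row that receipts still reference.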
edit
"already I have 1 primary key as MEMID on memmast table"
If there is a one-to-one relationship between MEMID and MEMENROLNO, which is common when we have a business key and a surrogate key, then the proper solution is to reference the primary key from RECEIPTS1. That is, drop the column RCTMEMENROLNO and replace it with RCTMEMID, then plug those column names into that alter table statement. The front end would be responsible for providing the users with a facility to look up the MEMID for a given MEMENROLNO, such as a List Of Values widget.
A yuckier solution would be to build a unique constraint on MEMMAST.MEMENROLNO. That is not advisable, because using natural keys to enforce foreign keys is a bad idea, and utterly mad when the table in question already has a synthetic primary key.
If there isn't a one-to-one relationship between MEMID and MEMENROLNO, then I don't know what the purpose of the check is. A query on MEMMAST can assert the existence of a given MEMENROLNO now, but without a foreign key it can say nothing about the state of the database in five minutes' time. So why bother with the check at all?
It looks like you're looping through all possible values of memenrolno and making sure the new value matches every possible value, which will always be false when there is more than one possible memenrolno.
Try this instead.
CREATE OR REPLACE TRIGGER HDD_CABLE.trg_rctenrolno
before insert ON HDD_CABLE.RECEIPTS1 for each row
declare
v_cnt pls_integer;
begin
select count(*) into v_cnt
from memmast
where memenrolno = :new.rctmemenrolno
and rownum = 1;
if v_cnt = 0
then
raise_application_error(-20186,'PLEASE ENTER CORRECT ENROLLMENT NO');
end if;
end;

What's the best way to have a "repeating field" in SQL?

I'm trying to set up a table that links two records from a different table. These links themselves need to be correlated to another table. So at the moment, my table looks like this:
link_id (primary key)
item_id_1 (foreign key)
item_id_2 (foreign key)
link_type (metadata)
However, the links between items are not directional (i.e. it should make no difference whether an item is the first or second listed in a link). Ideally, I'd like for the item_id field to just appear twice; as it is I'll have to be careful to always be checking for duplicates to make sure that there's never a record created linking 12 to 14 if 14 to 12 already exists.
Is there an elegant database design solution to this, or should I just adopt a convention (e.g. id_1 is always the smaller id number) and police duplication within the application?
Thanks in advance!
You could use a join table.
Table1: link_id (PK), link_type
JoinTable: table1_link_id, item_id (composite primary key composed of both ids)
Benzado already pointed it out - add a constraint that enforces item_id_1 < item_id_2:
ALTER TABLE t ADD CONSTRAINT CH_ITEM1_LESSTHAN_ITEM2 CHECK (item_id_1 < item_id_2)
So this will prevent the wrong data from being entered, rejecting such updates/inserts.
If you want to automatically correct any situation where item_id_1 > item_id_2, you could add a trigger instead (technically you could have both, but then you might have some hassle getting it to work right, as check constraints could be checked before the trigger fires). The exact syntax for triggers depends on your RDBMS.
There's a couple of ways to implement this. One would be with a combination of a check constraint and a unique constraint.
alter table t23
add constraint c1_c2_ck check (c1 < c2)
/
alter table t23
add constraint t23_uk unique (c1, c2)
/
That would work in most DBMS flavours. An alternative approach, which would work in Oracle at least, would be to use a function-based index ....
create unique index t23_uidx on t23
(least(c1,c2), greatest(c1,c2))
/
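SQLite can mimic the function-based index with an index on expressions, using its scalar min()/max() functions in place of Oracle's least()/greatest() (a sketch via Python's sqlite3):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t23 (c1 INTEGER NOT NULL, c2 INTEGER NOT NULL)")
# Normalise each pair to (smaller, larger) inside the index itself,
# so a link is unique regardless of which item is listed first
conn.execute("CREATE UNIQUE INDEX t23_uidx ON t23 (min(c1, c2), max(c1, c2))")

conn.execute("INSERT INTO t23 VALUES (12, 14)")

dup_blocked = False
try:
    conn.execute("INSERT INTO t23 VALUES (14, 12)")  # same undirected pair
except sqlite3.IntegrityError:
    dup_blocked = True
```

The nice property of this approach is that callers can insert the pair in either order; the index, not the application, enforces the "undirected link" rule.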