SQL constraint to check whether value doesn't exist in another table

In my PostgreSQL 9.4 database, I have a table fields with a column name with unique values.
I'm creating a new table fields_new with a similar structure (not important here) and a column name as well. I need a way to constrain name values inserted into fields_new so that they are not already present in fields.name.
For example, if fields.name contains the values 'color' and 'length', I need to prevent fields_new.name from containing 'color' or 'length'. In other words, I need to ensure that the name columns of the two tables have no duplicate values between them. And the constraint should go both ways.

Only enforce constraint for new entries in fields_new
CHECK constraints are supposed to be immutable, which generally rules out any kind of reference to other tables, which are not immutable by nature.
To allow some leeway (especially with temporal functions) STABLE functions are tolerated. Obviously, this cannot be completely reliable in a database with concurrent write access. If rows in the referenced table change, they may be in violation of the constraint.
Declare the invalid nature of your constraint by making it NOT VALID (Postgres 9.1+). This way Postgres also won't try to enforce it during a restore (which might be bound to fail). Details here:
Disable all constraints and table checks while restoring a dump
The constraint is only enforced for new rows.
CREATE OR REPLACE FUNCTION f_fields_name_free(_name text)
RETURNS bool AS
$func$
SELECT NOT EXISTS (SELECT 1 FROM fields WHERE name = $1);
$func$ LANGUAGE sql STABLE;
ALTER TABLE fields_new ADD CONSTRAINT fields_new_name_not_in_fields
CHECK (f_fields_name_free(name)) NOT VALID;
Plus, of course, a UNIQUE or PRIMARY KEY constraint on fields_new(name) as well as on fields(name).
Related:
CONSTRAINT to check values from a remotely related table (via join etc.)
Function to update a status flag for validity of other column?
Trigger vs. check constraint
Enforce both ways
You could go one step further and mirror the above CHECK constraint on the 2nd table. Still no guarantees against nasty race conditions when two transactions write to both tables at the same time.
Or you could maintain a "materialized view" manually with triggers: a union of both name columns. Add a UNIQUE constraint there. Not as rock solid as the same constraint on a single table: there might be race conditions for writes to both tables at the same time. But the worst that can happen is a deadlock forcing transactions to be rolled back. No permanent violation can creep in if all write operations are cascaded to the "materialized view".
Similar to the "dark side" in this related answer:
Can PostgreSQL have a uniqueness constraint on array elements?
Just that you need triggers for INSERT / UPDATE / DELETE on both tables.
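The trigger-maintained union table described above might be sketched like this (an illustration only; it assumes the fields / fields_new tables from the question, and the helper table, function and trigger names are made up):

```sql
-- Helper table holding the union of both name columns.
CREATE TABLE all_names (name text PRIMARY KEY);  -- PK plays the role of the UNIQUE constraint

CREATE OR REPLACE FUNCTION trg_sync_all_names()
  RETURNS trigger AS
$func$
BEGIN
   IF TG_OP IN ('UPDATE', 'DELETE') THEN
      DELETE FROM all_names WHERE name = OLD.name;
   END IF;
   IF TG_OP IN ('INSERT', 'UPDATE') THEN
      INSERT INTO all_names(name) VALUES (NEW.name);  -- PK violation if the name exists in either table
   END IF;
   RETURN NULL;  -- return value is ignored for AFTER triggers
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER fields_sync_names
AFTER INSERT OR UPDATE OF name OR DELETE ON fields
FOR EACH ROW EXECUTE PROCEDURE trg_sync_all_names();

CREATE TRIGGER fields_new_sync_names
AFTER INSERT OR UPDATE OF name OR DELETE ON fields_new
FOR EACH ROW EXECUTE PROCEDURE trg_sync_all_names();
```

Existing rows in both tables would have to be copied into all_names once before the triggers take over.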

I had a similar problem where I wanted to maintain a list of items per-company, along with a global list for all companies. If the company number is 0, it is to be treated as global and a new item cannot be inserted for ANY company using that name. The following script (based on the above solution) seems to work:
drop table if exists blech;
CREATE TABLE blech (
company int,
name_key text,
unique (company, name_key)
);
create or replace function f_foobar(new_company int, new_name_key text) returns bool as
$func$
select not exists (
select 1 from blech b
where $1 <> 0
and b.company = 0
and b.name_key = $2);
$func$ language sql stable;
alter table blech add constraint global_unique_name_key
check (f_foobar(company, name_key)) not valid;
insert into blech values(0,'GLOB1');
insert into blech values(0,'GLOB2');
-- should succeed:
insert into blech values(1,'LOCAL1');
insert into blech values(2,'LOCAL1');
-- should fail:
insert into blech values(1,'GLOB1');
-- should fail:
insert into blech values(0,'GLOB1');

Related

How do you add complex constraints on a table? [closed]

In SQL Server (or any other SQL dialect, I guess), how do you make sure that a table always satisfies some complex constraint?
For instance, imagine a table with only columns A, B and C.
The constraint I want to enforce is that all rows containing the same value in column A (e.g. 'a1') also have the same value in column B (e.g. 'b'). For instance, this isn't allowed:
A   B   C
a1  b   c1
a1  c   c2
a2  d   c3
But this is allowed:
A   B   C
a1  b   c1
a1  b   c2
a2  d   c3
This is one example of a constraint, but others could be more complex (i.e. depend on more columns, etc.). Here this particular constraint could arguably be expressed in terms of a primary key, but let's assume it can't.
What is the way to enforce this?
What I can think of is, each time I want to do an insert, I instead go through a stored procedure.
The stored procedure would run an atomic transaction, check whether the table would still satisfy the constraint after the change, and reject the change if the constraint would be violated, but this seems quite complex.
Rough definitions, for this answer:
Simple constraint. A constraint that can be enforced, without schema changes beyond the constraint definition, by the straightforward use of one of SQL Server's constraints: primary key, unique, foreign key, not null, and check against a simple scalar expression. (I would include a unique filtered index as a simple constraint.)
Complex constraint. A constraint that cannot be so enforced.
Just as SQL Server provides different constraints to handle different situations, there are different techniques for enforcing more complex constraints. Which technique is appropriate depends on what one is trying to accomplish and the context one is in.
Broadly, there are two approaches to enforcing complex constraints: either add code that prevents bad data, or change the schema in such a way that SQL Server can enforce the constraint with a simple constraint.
Add code to enforce the constraint.
A common refrain is to use a trigger to enforce a constraint. So one writes an after trigger that checks whether the constraint has been violated by the DML that fired the trigger; if it has, the trigger throws an error, rolling back the transaction. Another approach is to take insert, update and delete rights away from all applications and force them through stored procedures that error on attempts to violate the constraint. Using a UDF in a check constraint is new to me. The mechanism is similar to the after trigger: a DML statement makes a change to the data; after that change, but before the DML statement completes, the code in the UDF is run and the result is checked to see whether the constraint is violated.
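As an illustration only, an after trigger for the A/B rule from the question might look like this (the trigger name and message text are invented):

```sql
CREATE TRIGGER dbo.TR_TableX_CheckAB
ON dbo.TableX
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- If any A group now maps to more than one distinct B, undo the DML.
    IF EXISTS (
        SELECT 1
        FROM dbo.TableX
        GROUP BY A
        HAVING COUNT(DISTINCT B) > 1
    )
    BEGIN
        ROLLBACK TRANSACTION;
        THROW 50000, 'Each value of A must map to a single value of B.', 1;
    END;
END;
```

This is still subject to the concurrency problems discussed later in this answer.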
The big challenge with adding code that checks the constraint is correctly enforcing the constraint in the face of concurrency. Because transactions are isolated, the code verifying that the constraint is met may not see a conflicting change made by another session. See an example "Concurrency can cause trouble" at the end.
Now, some business rules are too complex to enforce with changes to the schema, and code is required. I generally don't consider such business rules to be constraints that the database must enforce, and place the code enforcing them in either stored procedures or application code, alongside the other business rules. (Calling these business rules rather than database constraints does not make getting correctness in the face of concurrency any easier.)
Change the schema so SQL Server can enforce the constraint.
A strength reduction from a complex constraint to a simple constraint. There is no single technique in this approach. Sometimes the schema change that will allow SQL Server to enforce a constraint is easy to spot; sometimes one needs to play with SQL Server features with outside-the-box thinking. Some schema changes that may be helpful:
Increase the normalization. Many reasonable constraints cannot be expressed against denormalized data.
Adding a persisted computed column to a table and using that column in a constraint.
Indexed views can be amazing for constraint enforcement. (The restrictions placed on what views can be indexed is super frustrating.)
Example using another table
(This may count as normalization.) Let's consider a table like the one in your question, but with one additional column: a primary key that allows update and delete of individual rows.
CREATE TABLE dbo.T (PK INT PRIMARY KEY IDENTITY(1, 1)
, A CHAR(1) NOT NULL
, B INT NOT NULL
/* other columns as needed */);
And the rule is that the following query should never return a row:
SELECT 'Uh oh!' AS "Uh oh!"
FROM dbo.T
GROUP BY A
HAVING COUNT(DISTINCT B) > 1
To enforce this one can add the following table and constraint:
CREATE TABLE dbo.TConstraint (A CHAR(1) NOT NULL PRIMARY KEY /* Only one B is allowed for any A */
, B INT NOT NULL
, UNIQUE (A, B) /* provide a target for a foreign key from dbo.T */);
ALTER TABLE dbo.T ADD FOREIGN KEY (A, B)
REFERENCES dbo.TConstraint (A, B);
Now the application logic or stored procedures that write to dbo.T will have to change to write to dbo.TConstraint first. Updates to dbo.T (B) are going to be messy, because one will need to change the values in dbo.TConstraint before dbo.T, but that can't be done while the old values are still in dbo.T. So an update becomes:
Delete row from dbo.T
Update dbo.TConstraint
Insert "changed" row into dbo.T.
That update logic is incompatible with an IDENTITY column and incompatible with any foreign key referencing dbo.T. So the contexts where this would work are limited.
Note also that this solution includes adding additional code for insert, update, and possibly delete. That extra code is not enforcing the constraint, but manipulating the data into a form such that SQL Server can enforce the constraint.
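For a single row, that delete/update/insert dance might be sketched like this (hypothetical values; note that with an IDENTITY column the re-inserted row gets a new PK, which is exactly the incompatibility mentioned):

```sql
BEGIN TRANSACTION;
-- Change B from 100 to 200 for A = 'a' (assume PK = 1 is the affected row).
DELETE FROM dbo.T WHERE PK = 1;                       -- no child row now references ('a', 100)
UPDATE dbo.TConstraint SET B = 200 WHERE A = 'a';     -- safe: nothing references the old key
INSERT INTO dbo.T (A, B) VALUES ('a', 200);           -- satisfies the foreign key again
COMMIT;
```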
Example using indexed views
Indexed views come with caveats. See the documentation: https://learn.microsoft.com/en-us/sql/relational-databases/views/create-indexed-views?view=sql-server-ver15
Instead of the new table and constraint:
/* SET OPTIONS required by indexed views*/
SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS
, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
GO
/* view shows all the distinct A, B in dbo.T */
CREATE VIEW dbo.TConstraint WITH SCHEMABINDING AS
SELECT A
, B
, COUNT_BIG (*) AS MustHaveForGroupingInAnIndexedView
FROM dbo.T
GROUP BY A, B;
GO
CREATE UNIQUE CLUSTERED INDEX A_B ON dbo.TConstraint (A, B);
CREATE UNIQUE NONCLUSTERED INDEX A ON dbo.TConstraint (A);
INSERT INTO dbo.T (A, B)
VALUES ('a', 100), ('a', 200)
Msg 2601, Level 14, State 1, Line 27
Cannot insert duplicate key row in object 'dbo.TConstraint' with unique index 'A'. The duplicate key value is (a).
Pros:
1) No extra code to get the data into a form SQL Server can constrain.
2) Updates to dbo.T (B) can be done without the delete-from-dbo.T / update-dbo.TConstraint / insert-into-dbo.T dance and all the restrictions it creates.
Cons:
1) The view has to have schemabinding.
2) Deploying schema changes to dbo.T will involve dropping the indexes on the view, dropping the view, altering dbo.T, creating the view, and creating the indexes on the view. Some deployment tools may struggle with the required steps, and large tables with large indexed views will make deployments of changes to dbo.T take longer.
Concurrency can cause trouble.
@Dai's answer will work in at least some cases, but fails in the following case.
Setup:
Create dbo.TableX, dbo.GetCountInvalidGroupsInTableX(), and CK_Validity as detailed in Dai's answer.
Enable snapshot isolation for the database. ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON
Connect to the database with two separate sessions.
In session 1:
/* Read committed was the default in my connection */
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
INSERT INTO TableX VALUES(10, 100);
In session 2:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
INSERT INTO TableX VALUES(10, 200);
/* (1 row affected) This insert did not block
on the open transaction */
In session 1:
COMMIT;
SELECT * FROM TableX;
/*
A B
10 100
10 200
*/
To add code enforcing a constraint, one must know which isolation levels may be used and ensure that the DML blocks when needed, so that the code checking that the constraint is still satisfied can see everything it needs to see.
Summary
Just as SQL Server has different constraint types for different situations, there are different techniques for enforcing constraints when what SQL provides against the current data model is not enough. Prefer schema changes that allow SQL Server to enforce the constraints: SQL Server is far more likely to get concurrency correct than most people writing Transact-SQL.
SQL Server supports UDFs in CHECK constraints.
While CHECK constraints are ostensibly a row-level-only constraint, a CHECK constraint can still use a UDF which checks the entire table, and I believe that then suits your purposes.
Note that CHECK constraints are evaluated for every row in a table (though only on-demand) and SQL Server isn't smart enough to detect redundant constraint evaluations. So be careful - and be sure to monitor and profile your database's performance.
(The inner-query in the function below returns the A and COUNT(DISTINCT x.B) values so you can easily copy+paste the inner-query into a new SSMS tab to see the invalid data. Whereas the function's actual execution-plan will optimize-away those columns because they aren't used by the outer-query, so there's no harm in having them in the inner-query).
CREATE FUNCTION dbo.GetCountInvalidGroupsInTableX()
RETURNS bit
AS
BEGIN
DECLARE @countInvalidGroups int = (
SELECT
COUNT(*)
FROM
(
SELECT
x.A,
COUNT(DISTINCT x.B) AS CountDistinctXB
FROM
dbo.TableX AS x
GROUP BY
x.A
HAVING
COUNT(DISTINCT x.B) >= 2
) AS invalidGroups -- derived tables require an alias
);
RETURN @countInvalidGroups;
END
Used like so:
CREATE TABLE TableX (
A int NOT NULL,
B int NOT NULL,
CONSTRAINT PK_TableX PRIMARY KEY ( etc )
);
CREATE FUNCTION dbo.GetCountInvalidGroupsInTableX() RETURNS bit
AS
/* ... */
END;
ALTER TABLE TableX
ADD CONSTRAINT CK_Validity CHECK ( dbo.GetCountInvalidGroupsInTableX() = 0 );

Unique index by another column for postgresql

For example, I have this table (postgresql):
CREATE TABLE t(
a TEXT,
b TEXT
);
CREATE UNIQUE INDEX t_a_uniq_idx ON t(a);
I want to create a unique constraint/index for the b and a columns. But not a simple ADD CONSTRAINT t_ab UNIQUE (a, b): I want b to be unique with respect to a:
INSERT INTO t(a,b) VALUES('123', null); -- this is ok
INSERT INTO t(a,b) VALUES('456', '123'); -- this is not ok, because duplicate '123'
How can I do this?
Edit:
Why do I need this? For example, If I have users table and I want to create email changing feature, I need structure like this:
CREATE TABLE users(
email TEXT,
unconfirmed_email TEXT
-- some other data
);
CREATE UNIQUE INDEX unq_users_email_idx ON users(email);
A user can set a value in the unconfirmed_email column, but only if that value isn't already used in the email column.
If uniqueness is needed across both columns, I think you have the wrong data model. Instead of storing pairs on a single row, you should have two tables:
create table pairs (
pairid int generated always as identity,
. . . -- more information about the pair, if needed
);
create table pairElements (
pairElementId int generated always as identity,
pairId int references pairs(pairid),
which int check (which in (1, 2)),
value text,
unique (pairid, which)
);
Then the condition is simple:
alter table pairelements add constraint unq_pairelements_value unique (value);
While this leads to an interesting problem, I agree that the data could be modelled better. Specifically, the column unconfirmed_email can be seen as combining two attributes: an association between an address and a user, which it shares with the email column; and a status of whether that address is confirmed, which is dependent on the combination of user and address, not on one or the other.
This implies that a new table, user_email_addresses, should be extracted, with columns:
user_id - foreign key to users
email - non-nullable
is_confirmed boolean
Interestingly, as often turns out to be the case, this extracted table has natural data that could be added:
When was the address added?
When was it confirmed?
What is the verification code sent to the user?
If the user is allowed multiple addresses, which is the primary, or which is to be used for a particular purpose?
We can now model various constraints on this table (using unique indexes in some cases because you can't specify Where on a unique constraint):
Each user can only have one association (whether confirmed or not) with a particular e-mail address: Constraint Unique ( user_id, email )
An e-mail address can only be confirmed for one user: Unique Index On user_emails ( email ) Where is_confirmed Is True;
Each user can have only one confirmed address: Unique Index On user_emails ( user_id ) Where is_confirmed Is True;. You might want to adjust this to allow users to confirm multiple addresses, but have a single "primary" address.
Each user can have only one unconfirmed address: Unique Index On user_emails ( user_id ) Where is_confirmed Is False;. This is implied in your current design, but may not actually be necessary.
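Collected as DDL, the constraints listed above might look like this (a sketch; it assumes a users table with an id primary key, and the index names are made up):

```sql
CREATE TABLE user_emails (
  user_id      int     NOT NULL REFERENCES users (id),
  email        text    NOT NULL,
  is_confirmed boolean NOT NULL DEFAULT false,
  UNIQUE (user_id, email)  -- one association per user and address
);

-- An address can be confirmed for at most one user.
CREATE UNIQUE INDEX one_owner_per_confirmed_email
  ON user_emails (email) WHERE is_confirmed;

-- Each user has at most one confirmed address.
CREATE UNIQUE INDEX one_confirmed_email_per_user
  ON user_emails (user_id) WHERE is_confirmed;

-- Each user has at most one unconfirmed address (optional).
CREATE UNIQUE INDEX one_unconfirmed_email_per_user
  ON user_emails (user_id) WHERE NOT is_confirmed;
```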
This leaves us with a re-worded version of your original problem: how do we forbid unconfirmed rows with the same email as a confirmed row, but allow multiple identical unconfirmed rows.
One approach would be to use an Exclude constraint for rows where email matches, but is_confirmed doesn't match. The cast to int is necessary because creating the gist index on boolean fails.
Alter Table user_emails
Add Constraint unconfirmed_must_not_match_confirmed
Exclude Using gist (
email With =,
Cast(is_confirmed as Int) With <>
);
On its own, this would allow multiple copies of email, as long as they all had the same value of is_confirmed. But since we've already constrained against multiple rows where is_confirmed Is True, the only duplicates remaining will be where is_confirmed Is False on all matching rows.
Here is a db<>fiddle demonstrating the above design: https://dbfiddle.uk/?rdbms=postgres_12&fiddle=fd8e4e6a4cce79d9bc6bf07111e68df9
As I understand it, you want a UNIQUE index over a and b combined.
Your update narrowed it down (b shall not exist in a). This solution is stricter.
Solution
After trying and investigating quite a bit (see below!) I came up with this:
ALTER TABLE tbl ADD CONSTRAINT a_not_equal_b CHECK (a <> b);
ALTER TABLE tbl ADD CONSTRAINT ab_unique
EXCLUDE USING gist ((ARRAY[hashtext(COALESCE(a, ''))
, hashtext(COALESCE(b, ''))]) gist__int_ops WITH &&);
db<>fiddle here
Since the exclusion constraint won't currently (pg 12) work with text[], I work with int4[] of hash values. hashtext() is the built-in hash function that's also used for hash-partitioning (among other uses). Seems perfect for the job.
The operator class gist__int_ops is provided by the additional module intarray, which has to be installed once per database. It's optional; the solution works with the default array operator class as well. Just drop gist__int_ops to fall back. But intarray is faster. Related:
How to create an index for elements of an array in PostgreSQL?
Caveats
int4 may not be big enough to rule out hash collisions sufficiently. You might want to go with bigint instead. But that's more expensive and can't use the gist__int_ops operator class to improve performance. Your call.
Unicode has the dismal property that equal strings can be encoded in different ways. If you work with Unicode (typical encoding UTF8) and use non-ASCII characters (and this matters to you), compare normalized forms to rule out such duplicates. The upcoming Postgres 13 adds the function normalize() for that purpose. This is a general caveat of character type duplicates, though, not specific to my solution.
NULL values are allowed, but collide with the empty string (''). I would rather go with NOT NULL columns and drop COALESCE() from the expressions.
Obstacle course to an exclusion constraint
My first thought was: exclusion constraint. But it falls through:
ALTER TABLE tbl ADD CONSTRAINT ab_unique EXCLUDE USING gist ((ARRAY[a,b]) WITH &&);
ERROR: data type text[] has no default operator class for access method "gist"
HINT: You must specify an operator class for the index or define a default operator class for the data type.
There is an open TODO item for this. Related:
ERROR: data type text[] has no default operator class for access method “gist”
But can't we use a GIN index for text[]? Alas, no:
ALTER TABLE tbl ADD CONSTRAINT ab_unique EXCLUDE USING gin ((ARRAY[a,b]) WITH &&);
ERROR: access method "gin" does not support exclusion constraints
Why? The manual:
The access method must support amgettuple (see Chapter 61); at present this means GIN cannot be used.
It seems hard to implement, so don't hold your breath.
If a and b were integer columns, we could make it work with an integer array:
ALTER TABLE tbl ADD CONSTRAINT ab_unique EXCLUDE USING gist ((ARRAY[a,b]) WITH &&);
Or with the gist__int_ops operator class from the additional module intarray (typically faster):
ALTER TABLE tbl ADD CONSTRAINT ab_unique EXCLUDE USING gist ((ARRAY[a,b]) gist__int_ops WITH &&);
To also forbid duplicates within the same row, add a CHECK constraint:
ALTER TABLE tbl ADD CONSTRAINT a_not_equal_b CHECK (a <> b);
Remaining issue: does not work with NULL values.
Workaround
Add a helper table to store values from a and b in one column:
CREATE TABLE tbl_ab(ab text PRIMARY KEY);
Main table, like you had it, plus FK constraints.
CREATE TABLE tbl (
a text REFERENCES tbl_ab ON UPDATE CASCADE ON DELETE CASCADE
, b text REFERENCES tbl_ab ON UPDATE CASCADE ON DELETE CASCADE
);
Use a function like this to INSERT:
CREATE OR REPLACE FUNCTION f_tbl_insert(_a text, _b text)
RETURNS void
LANGUAGE sql AS
$func$
WITH ins_ab AS (
INSERT INTO tbl_ab(ab)
SELECT _a WHERE _a IS NOT NULL -- NULL is allowed (?)
UNION ALL
SELECT _b WHERE _b IS NOT NULL
)
INSERT INTO tbl(a,b)
VALUES (_a, _b);
$func$;
db<>fiddle here
Or implement a trigger to take care of it in the background.
CREATE OR REPLACE FUNCTION trg_tbl_insbef()
RETURNS trigger AS
$func$
BEGIN
INSERT INTO tbl_ab(ab)
SELECT NEW.a WHERE NEW.a IS NOT NULL -- NULL is allowed (?)
UNION ALL
SELECT NEW.b WHERE NEW.b IS NOT NULL;
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
CREATE TRIGGER tbl_insbef
BEFORE INSERT ON tbl
FOR EACH ROW EXECUTE PROCEDURE trg_tbl_insbef();
db<>fiddle here
NULL handling can be changed as desired.
Either way, while the added (optional) FK constraints ensure we can't sidestep the helper table tbl_ab, and allow UPDATE and DELETE in tbl_ab to cascade, you still need to propagate UPDATE and DELETE into the helper table as well (or implement more triggers). Tricky corner cases, but there are solutions. I'm not going into this, after finding the solution above with an exclusion constraint using hashtext() ...
Related:
Can PostgreSQL have a uniqueness constraint on array elements?

Create a table with a foreign key referencing to a temporary table generated by a query

I need to create a table having a field, which is a foreign key referencing to another query rather than existing table. E.g. the following statement is correct:
CREATE TABLE T1 (ID1 varchar(255) references Types)
but this one throws a syntax error:
CREATE TABLE T2 (ID2 varchar(255) references SELECT ID FROM BaseTypes UNION SELECT ID FROM Types)
I cannot figure out how to achieve my goal. If it's necessary to introduce a temporary table, how can I force this table to be updated each time the tables BaseTypes and Types change?
I am using Firebird DB and IBExpert management tool.
A foreign key constraint (references) can only reference a table (or more specifically columns in the primary or unique key of a table). You can't use it to reference a select.
If you want to do that, you need to use a CHECK constraint, but that constraint is only checked on inserts and updates: it wouldn't prevent other changes (e.g. to the tables in your select) from making the constraint invalid while the data is at rest. This means that at insert time the value could satisfy the constraint, but the constraint could, unnoticed, become invalid. You would only notice this when updating the row.
An example of the CHECK-constraint could be:
CREATE TABLE T2 (
ID2 varchar(255) check (exists(
SELECT ID FROM BaseTypes WHERE BaseTypes.ID = ID2
UNION
SELECT ID FROM Types WHERE Types.ID = ID2))
)
For a working example, see this fiddle.
Alternatively, if your goal is to 'unite' two tables, define a 'super'-table that contains the primary keys of both tables, and reference that table from the foreign key constraint. You could populate and update (eg insert and delete) this table using triggers. Or you could use a single table, and replace the existing views with an updatable view (if this is possible depends on the exact data, eg IDs shouldn't overlap).
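A sketch of that super-table approach in Firebird syntax (the helper table and trigger names are invented; UPDATE handling and the triggers on Types are omitted for brevity):

```sql
CREATE TABLE AllTypeIds (ID varchar(255) NOT NULL PRIMARY KEY);

SET TERM ^ ;
CREATE TRIGGER basetypes_ai FOR BaseTypes
ACTIVE AFTER INSERT POSITION 0
AS
BEGIN
  INSERT INTO AllTypeIds (ID) VALUES (NEW.ID);
END^

CREATE TRIGGER basetypes_ad FOR BaseTypes
ACTIVE AFTER DELETE POSITION 0
AS
BEGIN
  DELETE FROM AllTypeIds WHERE ID = OLD.ID;
END^
SET TERM ; ^

/* ...matching triggers on Types... */

CREATE TABLE T2 (ID2 varchar(255) REFERENCES AllTypeIds (ID));
```

Existing rows in BaseTypes and Types would need to be copied into the helper table once before the foreign key is created.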
This is more complex, but would give you the benefit that the foreign key is also enforced 'at rest'.

Oracle Unique Constraint - Trigger to check value of property in new relation

Hi, I'm having trouble getting my SQL syntax correct. I want to create a unique constraint that looks at the newly added foreign key and at some properties of the newly related entity, to decide whether the relationship is allowed.
CREATE or replace TRIGGER "New_Trigger"
AFTER INSERT OR UPDATE ON "Table_1"
FOR EACH ROW
BEGIN
Select "Table_2"."number"
(CASE "Table_2"."number" > 0
THEN RAISE_APPLICATION_ERROR(-20000, 'this is not allowed');
END)
from "Table_1"
WHERE "Table_2"."ID" = :new.FK_Table_2_ID
END;
Edit: APC's answer is wonderfully comprehensive, but leads me to think I'm doing it the wrong way.
The situation is that I have a table of people with different privilege levels, and I want to check these privilege levels. E.g. a user, 'Bob', has low-level privileges and tries to become head of department, which requires high privileges, so the system prevents this from happening.
There is a follow-up question which poses a related scenario but with a different data model. Find it here.
So the rule you want to enforce is that TABLE_1 can only reference TABLE_2 if some column in TABLE_2 is zero or less. Hmmm.... Let's sort out the trigger logic and then we'll discuss the rule.
The trigger should look like this:
CREATE or replace TRIGGER "New_Trigger"
AFTER INSERT OR UPDATE ON "Table_1"
FOR EACH ROW
declare
n "Table_2"."number"%TYPE;
BEGIN
Select "Table_2"."number"
into n
from "Table_2"
WHERE "Table_2"."ID" = :new.FK_Table_2_ID;
if n > 0
THEN RAISE_APPLICATION_ERROR(-20000, 'this is not allowed');
end if;
END;
Note that your error message should include some helpful information such as the value of the TABLE_1 primary key, for when you are inserting or updating multiple rows on the table.
What you are trying to do here is to enforce a type of constraint known as an ASSERTION. Assertions are specified in the ANSI standard but Oracle has not implemented them. Nor has any other RDBMS, come to that.
Assertions are problematic because they are symmetrical. That is, the rule also needs to be enforced on TABLE_2. At the moment you check the rule when a record is created in TABLE_1. Suppose at some later time a user updates TABLE_2.NUMBER so it is greater than zero: your rule is now broken, but you won't know that it is broken until somebody issues a completely unrelated UPDATE on TABLE_1, which will then fail. Yuck.
So, what to do?
If the rule is actually
TABLE_1 can only reference TABLE_2 if
TABLE_2.NUMBER is zero
then you can enforce it without triggers.
Add a UNIQUE constraint on TABLE_2 for (ID, NUMBER); you need an additional constraint because ID remains the primary key for TABLE_2.
Add a dummy column on TABLE_1 called TABLE_2_NUMBER. Default it to zero and have a check constraint to ensure it is always zero. (If you are on 11g you should consider using a virtual column for this.)
Change the foreign key on TABLE_1 so (FK_Table_2_ID, TABLE_2_NUMBER) references the unique constraint rather than TABLE_2's primary key.
Drop the "New_Trigger" trigger; you don't need it anymore as the foreign key will prevent anybody updating TABLE_2.NUMBER to a value other than zero.
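Those four steps might be sketched as DDL like this (constraint and column names are invented; it assumes the column's data type is NUMBER, and any existing single-column foreign key would have to be dropped first):

```sql
-- 1. Additional unique constraint on TABLE_2 (ID stays the primary key).
ALTER TABLE "Table_2" ADD CONSTRAINT table_2_id_number_uk UNIQUE ("ID", "number");

-- 2. Dummy column on TABLE_1, constrained to be always zero.
ALTER TABLE "Table_1" ADD (
  table_2_number NUMBER DEFAULT 0 NOT NULL
    CONSTRAINT table_1_t2num_zero CHECK (table_2_number = 0)
);

-- 3. Point the foreign key at the unique constraint's columns.
ALTER TABLE "Table_1" ADD CONSTRAINT table_1_table_2_fk
  FOREIGN KEY (FK_Table_2_ID, table_2_number)
  REFERENCES "Table_2" ("ID", "number");

-- 4. The trigger is no longer needed.
DROP TRIGGER "New_Trigger";
```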
But if the rule is really as I formulated it at the top i.e.
TABLE_1 can only reference TABLE_2 if
TABLE_2.NUMBER is not greater than zero (i.e. negative values are okay)
then you need another trigger, this time on TABLE_2, to enforce it the other side of the rule.
CREATE or replace TRIGGER "Assertion_Trigger"
BEFORE UPDATE of "number" ON "Table_2"
FOR EACH ROW
declare
x pls_integer;
BEGIN
if :new."number" > 0
then
begin
Select 1
into x
from "Table_1"
WHERE "Table_1"."FK_Table_2_ID" = :new.ID
and rownum = 1;
RAISE_APPLICATION_ERROR(-20001, :new.ID
||' has dependent records in Table_1');
exception
when no_data_found then
null; -- this is what we want
end;
end if;
END;
This trigger will not allow you to update TABLE_2.NUMBER to a value greater than zero if it is referenced by records in TABLE_1. It only fires if the UPDATE statement touches TABLE_2.NUMBER, to minimise the performance impact of executing the lookup.
Don't use a trigger to create a unique constraint or a foreign key constraint. Oracle has declarative support for unique and foreign keys, e.g.:
Add a unique constraint on a column:
ALTER TABLE "Table_1" ADD (
CONSTRAINT table_1_uk UNIQUE (column_name)
);
Add a foreign key relationship:
ALTER TABLE "ChildTable" ADD (
CONSTRAINT my_fk FOREIGN KEY (parent_id)
REFERENCES "ParentTable" (id)
);
I'm not clear on exactly what you're trying to achieve with your trigger - it's a bit of a mess of SQL and PL/SQL munged together which will not work, and seems to refer to a column on "Table_2" which is not actually queried.
A good rule of thumb is, if your trigger is querying the same table that the trigger is on, it's probably wrong.
I'm not sure, but are you after some kind of conditional foreign key relationship? i.e. "only allow child rows where the parent satisfies condition x"? If so, the problem is in the data model and should be fixed there. If you provide more explanation of what you're trying to achieve we should be able to help you.

MySQL: Can I constraint column values in one table to values in a column in another table, by DB design only?

Example:
Table "persons", Column "surname" may only contain values predefined in
Table "names", Column "surnames", which would contain a collection of surnames acceptable for the purpose.
Can I achieve this by design (i.e. without involving any validation code)? On a MyISAM table? No? On InnoDB?
Thank you.
What you're asking for is a foreign key constraint. You'd need to use InnoDB - quote:
For storage engines other than InnoDB, MySQL Server parses the FOREIGN KEY syntax in CREATE TABLE statements, but does not use or store it.
To add a foreign key constraint within the CREATE TABLE statement for PERSONS:
FOREIGN KEY (surname) REFERENCES names(surnames)
Using an ALTER TABLE statement if the tables already exist:
ALTER TABLE persons
ADD CONSTRAINT FOREIGN KEY (surname) REFERENCES names(surnames)
Be aware that if you use the ALTER TABLE statement, the data in the persons table can only contain surname values that already exist in names.surnames - the constraint cannot be applied until the data has been fixed.
For MyISAM tables you can achieve desired functionality by using triggers.
For instance (validate insert),
DELIMITER //
CREATE DEFINER=`root`@`localhost` TRIGGER persons_check_surname BEFORE INSERT ON persons
FOR EACH ROW
BEGIN
DECLARE tmp_surname varchar(100);
SELECT surname into tmp_surname FROM names WHERE surname = NEW.surname;
IF (tmp_surname IS NULL) THEN
INSERT INTO t1(id,value) VALUES('aaa'); #raise an 'exception'
END IF;
END;//
DELIMITER ;
Older MySQL versions don't have a way to raise exceptions, but you can terminate execution (and, consequently, 'roll back' changes) by running a deliberately invalid statement, as above. Since MySQL 5.5 you can raise a proper error with SIGNAL SQLSTATE instead.
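On MySQL 5.5 or later, the same check can raise a real error with SIGNAL rather than relying on an invalid statement (a sketch using the question's tables; the trigger name and message text are made up):

```sql
DELIMITER //
CREATE TRIGGER persons_surname_check BEFORE INSERT ON persons
FOR EACH ROW
BEGIN
  -- Reject the insert if the surname is not in the lookup table.
  IF NOT EXISTS (SELECT 1 FROM names WHERE surnames = NEW.surname) THEN
    SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'surname not present in names.surnames';
  END IF;
END//
DELIMITER ;
```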