I have problems using a CHECK constraint. I have two tables
Users Table
userid | register_date
Activity table
id | userid | activity_date
I need to put a constraint that disallows the insertion of an activity_date which is less than a register_date. I could do it with a CHECK constraint if they were in same table. But, how do you do it for two different tables? (also Oracle disallows sub-queries in a check constraint).
Is there any other way to perform this action?
The simplest way is to have a trigger:
create or replace trigger tr_activity
before insert or update of activity_date on activity
for each row
declare
l_register_date users.register_date%type;
begin
select register_date
into l_register_date
from users
where userid = :new.userid;
if :new.activity_date < l_register_date then
raise_application_error(-20000, 'Stop attempting the impossible');
end if;
end tr_activity;
/
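For illustration (sample data assumed), an activity dated before the user's registration is now rejected:
insert into users (userid, register_date)
values (1, date '2020-01-01');
-- raises ORA-20000: Stop attempting the impossible
insert into activity (id, userid, activity_date)
values (100, 1, date '2019-12-31');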
But this seems a little strange; I would assume that you only ever insert the current date as the activity date, in which case the registration date will always be before the activity date (unless the registration date is updated later). I would simply ensure that the activity date is never inserted or updated by your application code and use a default value on the table instead:
alter table activity modify activity_date default sysdate not null
CREATE OR REPLACE TRIGGER TRIGGER_RENEWAL
AFTER UPDATE OF NEXT_RENEW_DATE ON SUBSCRIPTION_CUSTOMER
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
UPDATE Subscription_log
SET next_renew_date = :NEW.next_renew_date, previous_renew_date = :OLD.next_renew_date
"WHERE rowid = updated row;"
END;
The table "Subscriptio_log" has a column "Phone_number" and "Email_address" referenced from the "Subscription_Customer" as a FK and PK
The case here is that I'd like to trigger an update on a log table whenever the subscription customer makes an update of their next renewal date.
The problem I'm facing is that I can't figure a way to select only specific rows to update the value of next_renew_date and previous_renew_date.
Is there a way to select the rowID or other ways to update based on the FK "Phone_number" and "Email_address"?
If your PHONE | EMAIL combination uniquely defines a row in your log table, and if the same data is accessible within your database trigger, then you can update that unique record in your log. If that is not the case, then all the log records having the same PHONE | EMAIL combination will be updated.
This structure of the log keeps just the last two changes made to your customer data (the previous and next date columns), so all earlier history is definitely lost.
Log table - sample data
PHONE      | EMAIL               | PREV_DATE | NEXT_DATE
-----------+---------------------+-----------+-----------
1915555678 | john.doe@domain.com | 07-JUL-20 | 23-FEB-21
1995555001 | jane.doe@domain.com | 12-APR-19 | 12-SEP-22
Assuming that the PREV_DATE column should be overwritten with the value of the NEXT_DATE column within the same row of the log table (if not, you can define the update's SET PREV_DATE clause differently), you can try something like the following:
UPDATE subscription_log
SET PREV_DATE = NEXT_DATE,
    NEXT_DATE = your_desired_new_next_date_value
WHERE PHONE = subs_customer_trigg_phone_value
  AND EMAIL = subs_customer_trigg_email_value;
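Putting that together inside the trigger, an untested sketch could look like this (column names follow the sample above: PHONE/EMAIL in the log, Phone_number/Email_address on the customer table as described in the question, so :NEW can supply the values for the FK lookup):
CREATE OR REPLACE TRIGGER TRIGGER_RENEWAL
AFTER UPDATE OF NEXT_RENEW_DATE ON SUBSCRIPTION_CUSTOMER
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
BEGIN
  -- :NEW/:OLD hold the customer row being updated, so its key columns
  -- identify the matching log record(s)
  UPDATE Subscription_log
     SET next_renew_date     = :NEW.next_renew_date,
         previous_renew_date = :OLD.next_renew_date
   WHERE PHONE = :NEW.Phone_number
     AND EMAIL = :NEW.Email_address;
END;
/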
Regards...
I want to restrict insertion in my table based on some condition.
My table is like
col1 | col2 | Date Create
-----+------+-------------
A    | 1    | 04/05/2016
B    | 2    | 04/06/2016
A    | 3    | 04/08/2016   -- Do not allow insert
A    | 4    | 04/10/2016   -- Allow insert
So I want to restrict the insert based on the number of days since the same record was last inserted.
As shown in the example above, A can be inserted again only after 4 days have passed since its previous insertion, not before.
Any pointers on how I can do this in SQL/Oracle?
You only want to insert when there does not exist a record with the same col1 and a too-recent date_create:
insert into mytable (col1, col2, date_create)
select 'B' as col1, 4 as col2, trunc(sysdate) as date_create from dual ins
where not exists
(
select *
from mytable other
where other.col1 = ins.col1
and other.date_create > ins.date_create - 4
);
An undesired record would thus simply not be inserted. However, no exception would be raised. If you want that, I'd suggest a PL/SQL block or a before-insert trigger.
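For example, a before-insert trigger along these lines raises an error instead of silently skipping the row. This is only a sketch: a row-level trigger that queries its own table is subject to Oracle's mutating-table restriction (ORA-04091) for anything other than single-row INSERT ... VALUES statements, and it does not guard against concurrent sessions.
create or replace trigger trg_mytable_min_gap
before insert on mytable
for each row
declare
  l_recent pls_integer;
begin
  -- count rows for the same col1 created less than 4 days before the new row
  select count(*)
    into l_recent
    from mytable t
   where t.col1 = :new.col1
     and t.date_create > :new.date_create - 4;
  if l_recent > 0 then
    raise_application_error(-20001,
      'col1 = ' || :new.col1 || ' was already inserted less than 4 days ago');
  end if;
end;
/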
If several processes write to your table simultaneously with possibly conflicting data, the database itself should enforce the rule.
This can be solved by defining a constraint that checks whether an entry with the same col1 value younger than four days already exists.
As far as I know, it is not possible to define such a constraint directly. Instead, define a materialized view and add a constraint on that view.
create materialized view mytable_mv refresh on commit as
select f2.col1, f2.date_create, f1.date_create as date_create_conflict
from mytable f2, mytable f1
where f2.col1 = f1.col1
and f2.date_create > f1.date_create
and f2.date_create - f1.date_create < 4;
This materialized view will contain an entry, if and only if a conflict exists.
Now define a constraint on this view:
alter table mytable_mv add constraint check_date_create
check (date_create = date_create_conflict) deferrable;
The check is executed when the current transaction is committed, because that is when the materialized view is refreshed (as declared above with refresh on commit).
This works fine if you insert into your table mytable in an autonomous transaction, e.g. for a logging table.
In other cases, you can force a refresh of the materialized view with dbms_mview.refresh('mytable_mv'), or use an option other than refresh on commit.
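If the plain refresh on commit is not accepted, adding a materialized view log and selecting the rowids usually helps Oracle fast-refresh the view on commit. A sketch (untested, names invented; Oracle is picky about which defining queries it will fast-refresh, so this may need adjustment):
create materialized view log on mytable with rowid;
create materialized view mytable_conflicts_mv
build immediate refresh fast on commit as
select f2.rowid as f2_rowid, f1.rowid as f1_rowid
from mytable f2, mytable f1
where f2.col1 = f1.col1
and f2.rowid <> f1.rowid
and f2.date_create > f1.date_create
and f2.date_create - f1.date_create < 4;
alter table mytable_conflicts_mv add constraint chk_no_recent_duplicate
check (1 = 0) deferrable;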
Here is my structure (with values):
user_eval_history table
user_eval_id | user_id | is_good_eval
--------------+---------+--------------
1 | 1 | t
2 | 1 | t
3 | 1 | f
4 | 2 | t
user_metrics table
user_metrics_id | user_id | nb_good_eval | nb_bad_eval
-----------------+---------+--------------+-------------
1 | 1 | 2 | 1
2 | 2 | 1 | 0
For access time (performance) reasons I want to avoid recomputing user evaluation from the history again and again.
I would like to store/update the sums of evaluations (for a given user) every time a new evaluation is given to the user (meaning every time there is an INSERT into the user_eval_history table, I want to update the user_metrics table for the corresponding user_id).
I feel like I can achieve this with a trigger and a stored procedure but I'm not able to find the correct syntax for this.
I think I need to do what follows:
1. Create a trigger on user_eval_history:
CREATE TRIGGER update_user_metrics_trigger AFTER INSERT
ON user_eval_history
FOR EACH ROW
EXECUTE PROCEDURE update_user_metrics('user_id');
2. Create a stored procedure update_user_metrics that
2.1 Computes the metrics from the user_eval_history table for user_id
SELECT
user_id,
SUM( CASE WHEN is_good_eval='t' THEN 1 ELSE 0 END) as nb_good_eval,
SUM( CASE WHEN is_good_eval='f' THEN 1 ELSE 0 END) as nb_bad_eval
FROM user_eval_history
WHERE user_id = 'user_id' -- don't know the syntax here
2.2.1 Creates the entry into user_metrics if not already existing
INSERT INTO user_metrics
(user_id, nb_good_eval, nb_bad_eval) VALUES
(user_id, nb_good_eval, nb_bad_eval) -- Syntax?????
2.2.2 Updates the user_metrics entry if already existing
UPDATE user_metrics SET
(user_id, nb_good_eval, nb_bad_eval) = (user_id, nb_good_eval, nb_bad_eval)
I think I'm close to what is needed but don't know how to achieve this. Especially I don't know about the syntax.
Any idea?
Note: Please, no "RTFM" answers, I looked up for hours and didn't find anything but trivial examples.
First, revisit the assumption that maintaining an always current materialized view is a significant performance gain. You add a lot of overhead and make writes to user_eval_history a lot more expensive. The approach only makes sense if writes are rare while reads are more common. Else, consider a VIEW instead, which is more expensive for reads, but always current. With appropriate indexes on user_eval_history this may be cheaper overall.
Next, consider an actual MATERIALIZED VIEW (Postgres 9.3+) for user_metrics instead of keeping it up to date manually, especially if write operations to user_eval_history are very rare. The tricky part is when to refresh the MV.
Your approach makes sense if you are somewhere in between, user_eval_history has a non-trivial size and you need user_metrics to reflect the current state exactly and close to real-time.
Still on board? OK. First you need to define exactly what's allowed / possible and what's not. Can rows in user_eval_history be deleted? Can the last row of a user in user_eval_history be deleted? Probably yes, even if you would answer "No". Can rows in user_eval_history be updated? Can user_id be changed? Can is_good_eval be changed? If yes, you need to prepare for each of these cases.
Assuming the trivial case: INSERT only. No UPDATE, no DELETE. There is still the possible race condition you have been discussing with @sn00k4h. You found an answer to that, but that's really for INSERT or SELECT, while you have a classical UPSERT problem: INSERT or UPDATE:
FOR UPDATE, like you considered in the comments, is not the silver bullet here. UPDATE user_metrics ... locks the row it updates anyway. The problematic case is when two INSERTs try to create a row for a new user_id concurrently. In Postgres, you cannot lock key values that are not yet present in the unique index, so FOR UPDATE can't help. You need to prepare for a possible unique violation and retry, as discussed in these linked answers:
Upsert with a transaction
How to UPSERT (MERGE, INSERT ... ON DUPLICATE UPDATE) in PostgreSQL?
Code
Assuming these table definitions:
CREATE TABLE user_eval_history (
user_eval_id serial PRIMARY KEY
, user_id int NOT NULL
, is_good_eval boolean NOT NULL
);
CREATE TABLE user_metrics (
user_metrics_id serial -- seems useless
, user_id int PRIMARY KEY
, nb_good_eval int NOT NULL DEFAULT 0
, nb_bad_eval int NOT NULL DEFAULT 0
);
First, you need a trigger function before you can create a trigger.
CREATE OR REPLACE FUNCTION trg_user_eval_history_upaft()
RETURNS trigger
LANGUAGE plpgsql AS
$func$
BEGIN
LOOP
IF NEW.is_good_eval THEN
UPDATE user_metrics
SET nb_good_eval = nb_good_eval + 1
WHERE user_id = NEW.user_id;
ELSE
UPDATE user_metrics
SET nb_bad_eval = nb_bad_eval + 1
WHERE user_id = NEW.user_id;
END IF;
EXIT WHEN FOUND;
BEGIN -- enter block with exception handling
IF NEW.is_good_eval THEN
INSERT INTO user_metrics (user_id, nb_good_eval)
VALUES (NEW.user_id, 1);
ELSE
INSERT INTO user_metrics (user_id, nb_bad_eval)
VALUES (NEW.user_id, 1);
END IF;
RETURN NULL; -- returns from function, NULL for AFTER trigger
EXCEPTION WHEN UNIQUE_VIOLATION THEN -- user_metrics.user_id is UNIQUE
RAISE NOTICE 'It actually happened!'; -- hardly ever happens
END;
END LOOP;
RETURN NULL; -- NULL for AFTER trigger
END
$func$;
In particular, you don't pass user_id as a parameter to the trigger function; the special variable NEW automatically holds the values of the triggering row. Details are in the PostgreSQL manual chapter on trigger functions.
Trigger:
CREATE TRIGGER upaft_update_user_metrics
AFTER INSERT ON user_eval_history
FOR EACH ROW EXECUTE PROCEDURE trg_user_eval_history_upaft();
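A quick sanity check (sample data assumed):
INSERT INTO user_eval_history (user_id, is_good_eval)
VALUES (1, true), (1, true), (1, false);
SELECT user_id, nb_good_eval, nb_bad_eval FROM user_metrics WHERE user_id = 1;
-- expected: nb_good_eval = 2, nb_bad_eval = 1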
What is the best way to enforce key uniqueness in a temporal table (Oracle DBMS). A temporal table is one where all historical states are recorded with a time-span.
For example, we have a Key --> Value association like this ...
create table TEMPORAL_VALUES
(KEY1 varchar2(99) not null,
VALUE1 varchar2(99),
START_PERIOD date not null,
END_PERIOD date not null);
There are two constraints to enforce to do with the temporal nature of the table, to wit:
For each record we must have END_PERIOD > START_PERIOD. This is the period for which the Key->Value map is valid.
For each Key, there can't be any overlapping periods. The period includes the moment of the START_PERIOD, but excludes the exact moment of the END_PERIOD.
Constraint enforcement could be done either on row insert/update, or on commit. I don't really care, as long as it is impossible to commit invalid data.
I've been informed that the best practice to enforce constraints like this is to use materialized views instead of triggers.
Please advise on what is the best way to achieve this?
The Oracle banner is ...
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit
What I have tried so far
I think that this solution is close, but it doesn't really work because 'on commit' is needed. Oracle doesn't seem capable of creating a materialized view of this complexity which refreshes on commit.
create materialized view OVERLAPPING_VALUES
nologging cache build immediate
refresh complete on demand
as select 'Wrong!'
from
(
select KEY1, END_PERIOD,
lead( START_PERIOD, 1) over (partition by KEY1 order by START_PERIOD) as NEXT_START
from TEMPORAL_VALUES
)
where NEXT_START < END_PERIOD;
alter table OVERLAPPING_VALUES add CHECK( 0 = 1 );
What am I doing wrong? How do I get this work on commit to prevent invalid rows in TEMPORAL_VALUES?
After some struggling, experimentation and guidance from this forum post, here is what I ended up with:
drop table TEMPORAL_VALUE;
create table TEMPORAL_VALUE
(KEY1 varchar2(99) not null,
VALUE1 varchar2(99),
START_PERIOD date not null,
END_PERIOD date
)
/
alter table TEMPORAL_VALUE add
constraint CHECK_PERIOD check ( END_PERIOD is null or END_PERIOD > START_PERIOD)
/
alter table TEMPORAL_VALUE add
constraint PK_TEMPORAL_VALUE primary key (KEY1, START_PERIOD)
/
alter table TEMPORAL_VALUE add
constraint UNIQUE_END_PERIOD unique (KEY1, END_PERIOD)
/
create materialized view log on TEMPORAL_VALUE with rowid;
drop materialized view OVERLAPPING_VALUES;
create materialized view OVERLAPPING_VALUES
build immediate refresh fast on commit as
select a.rowid a_rowid, b.rowid b_rowid
from TEMPORAL_VALUE a, TEMPORAL_VALUE b
where a.KEY1 = b.KEY1
and a.rowid <> b.rowid
and a.START_PERIOD <= b.START_PERIOD
and (a.END_PERIOD is null or (a.END_PERIOD > b.START_PERIOD));
alter table OVERLAPPING_VALUES add CHECK( 0 = 1 );
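To see it in action (sample data assumed; the exact error stack may vary), an overlapping period is rejected when you try to commit it:
insert into TEMPORAL_VALUE values ('colour', 'red',  date '2020-01-01', date '2020-02-01');
insert into TEMPORAL_VALUE values ('colour', 'blue', date '2020-02-01', null);
commit;  -- succeeds: the periods do not overlap
insert into TEMPORAL_VALUE values ('colour', 'green', date '2020-01-15', date '2020-01-20');
commit;  -- fails: the on-commit refresh puts a row into OVERLAPPING_VALUES,
         -- which violates its CHECK (0 = 1) constraint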
Why does this work?
Why does this work, but my original posted view ...
select KEY1, END_PERIOD,
lead( START_PERIOD, 1) over (partition by KEY1 order by START_PERIOD) as NEXT_START
from TEMPORAL_VALUES
... will not be accepted as an on-commit materialized view? Well, the answer is that there appear to be limits on the complexity of on-commit materialized views. The view must include the rowids or keys of the underlying table, and must not be over some threshold of complexity.
There is a technique I've seen described for SQL Server (see this article and search for "Kuznetsov's History Table") which adds a third time column, previous_end_period that you can use to establish a foreign key on the table itself to enforce the constraint that the intervals can't overlap. I don't know if this can be adapted to Oracle.
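Untested in Oracle, but a rough adaptation might look like this (table and constraint names are invented; the idea is that every period except the first one for a key must chain off the end of an existing period):
create table TEMPORAL_VALUE_CHAIN
(KEY1            varchar2(99) not null,
 VALUE1          varchar2(99),
 START_PERIOD    date not null,
 END_PERIOD      date,
 PREV_END_PERIOD date, -- END_PERIOD of the preceding row; null only for the first row of a key
 constraint PK_TVC primary key (KEY1, START_PERIOD),
 constraint UQ_TVC_END unique (KEY1, END_PERIOD),
 constraint UQ_TVC_PREV unique (KEY1, PREV_END_PERIOD),
 constraint CK_TVC_ORDER check (END_PERIOD is null or END_PERIOD > START_PERIOD),
 constraint CK_TVC_CHAIN check (PREV_END_PERIOD is null or PREV_END_PERIOD <= START_PERIOD),
 constraint FK_TVC_PREV foreign key (KEY1, PREV_END_PERIOD)
   references TEMPORAL_VALUE_CHAIN (KEY1, END_PERIOD)
);
The trade-off is that inserts, updates and deletes all have to maintain the chain, which makes modifications considerably more awkward than the materialized-view approach.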
Nice solution Sean!
But I would add comments to your objects due to the complexity… something like:
COMMENT ON COLUMN TEMPORAL_VALUE.KEY1 IS 'Each key may have at most one value at any instant in time';
COMMENT ON COLUMN TEMPORAL_VALUE.START_PERIOD IS 'The period described includes the START_PERIOD date/time';
COMMENT ON COLUMN TEMPORAL_VALUE.END_PERIOD IS 'The period described does not include the END_PERIOD date/time. A null end period means until forever';
COMMENT ON TABLE TEMPORAL_VALUE IS 'Integrity is enforced by the materialized view OVERLAPPING_VALUES';
COMMENT ON MATERIALIZED VIEW OVERLAPPING_VALUES IS 'Used to enforce the rule that each key may have at most one value at any instant in time. This is an on-commit MV that holds any temporal values that overlap another (for the same key); the CHECK(0=1) constraint raises an exception if any rows are found, stopping any commit that would break integrity';
I personally like to prefix all materialized view names with MV_ and views with V_
Interesting that you don't allow START_PERIOD to be null. Most implementations would allow a null start and a non-null end to specify the period of everything before, and null values for both dates to indicate a constant value for a key.
I have a table with a column which contains a 'valid until' Date and I want to make sure that this can only be set to null in a single row within the table. Is there an easy way to do this?
My table looks like this (postgres):
CREATE TABLE "123".myTable (
some_id integer NOT NULL,
valid_from timestamp without time zone NOT NULL DEFAULT now(),
valid_until timestamp without time zone,
someString character varying);
some_id and valid_from are my PK. I want nobody to be able to enter a row with a null value in valid_until if there is already a row with a null valid_until for the same some_id.
Thank you
In PostgreSQL, you have two basic approaches.
Use 'infinity' instead of null. Then your unique constraint works as expected. Or if you cannot do that:
CREATE UNIQUE INDEX null_valid_until ON mytable (some_id) WHERE valid_until IS NULL;
I have used both approaches. I usually find the first approach cleaner, and it lets you make better use of range types and exclusion constraints in newer versions of PostgreSQL (to ensure no two time ranges overlap for a given some_id), but the second approach is often useful where the first cannot be done.
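For the first approach, an exclusion constraint could look roughly like this (a sketch; it needs the btree_gist extension, uses the column names from the question, and also copes with a NULL valid_until because tsrange treats a null bound as unbounded):
CREATE EXTENSION IF NOT EXISTS btree_gist;
ALTER TABLE myTable
  ADD CONSTRAINT no_overlapping_validity
  EXCLUDE USING gist (
    some_id WITH =,
    (tsrange(valid_from, valid_until, '[)')) WITH &&
  );
This rejects any two rows for the same some_id whose validity ranges overlap, which in particular allows at most one open-ended ('infinity' or null valid_until) row per some_id.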
Depending on the database, you can't have null in a primary key (I don't know about all databases, but in SQL Server you can't). The easiest way around this I can think of is to set the datetime to the minimum value, and then add a unique constraint on it, or set it to be the primary key.
I suppose another way would be to set up a trigger to check the other values in the table to see if another entry is null, and if there is one, don't allow the insert.
As Kevin said in his answer, you can set up a database trigger to stop someone from inserting more than one row where the valid until date is NULL.
The SQL statement that checks for this condition is:
SELECT COUNT(*)
FROM myTable
WHERE valid_until IS NULL;
If the count is not equal to 1, then your table has a problem.
The process that adds a row to this table has to perform the following:
Find the row where the valid_until value is NULL
Update its valid_until value to the current date, or some other meaningful date
Insert the new row with valid_until set to NULL
I'm assuming you are storing effective-dated records and are also using a valid-from date.
If so, you could use CRUD stored procedures to enforce this compliance, e.g. the insert closes off any null valid-until dates before inserting a new record with a null valid-until date.
You probably need other stored procedure validation to avoid overlapping records and to allow deleting and editing records. It may be more efficient (in terms of where clauses / faster queries) to use a date far in the future rather than using null.
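A minimal sketch of that idea in PL/pgSQL (the function name and parameters are made up; it assumes the table from the question):
CREATE OR REPLACE FUNCTION insert_valid_row(p_some_id int, p_someString varchar)
RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
  -- close off the currently open row for this id, if any
  UPDATE myTable
     SET valid_until = now()
   WHERE some_id = p_some_id
     AND valid_until IS NULL;
  -- insert the new open-ended row
  INSERT INTO myTable (some_id, valid_from, valid_until, someString)
  VALUES (p_some_id, now(), NULL, p_someString);
END;
$$;
Combined with the partial unique index from the accepted answer, a race between two concurrent calls cannot leave more than one open row per some_id (one of them will fail with a unique violation).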
I know only Oracle in sufficient detail, but the same might work in other databases:
Create another column which always contains a fixed value (say '0') and include this column in your unique key.
Don't use NULL but a specific very high or low value. In many cases this is actually easier to use than a NULL value.
Make a function-based unique index using a function that converts the date, including the null value, to some other value (e.g. a string representation for dates and 'x' for null); a sketch follows after this list.
Make a materialized view which gets updated on every change to your main table and put a constraint on that view.
select count(*) cnt from your_table where valid_until is NULL
might work as the select statement for that view, with a check constraint limiting the cnt value to 0 and 1.
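One way to write the function-based variant in Oracle (index name invented, column names from the question; it indexes some_id only for rows where valid_until is null, so at most one "open" row per some_id is possible):
create unique index uq_one_open_valid_until
on myTable (case when valid_until is null then some_id end);
Rows whose valid_until is not null map to NULL in the expression and are simply not indexed, so they stay unconstrained.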
I would suggest inserting into that table through an SP and putting your constraint in there, as triggers are quite hidden and will likely be forgotten about. If that's not an option, the following trigger will work:
CREATE TABLE dbo.TESTTRIGGER
(
YourDate Date NULL
)
CREATE TRIGGER DupNullDates
ON dbo.TESTTRIGGER
FOR INSERT, UPDATE
AS
DECLARE @nullCount int
SELECT @nullCount = (SELECT COUNT(*) FROM TESTTRIGGER WHERE YourDate IS NULL)
IF(@nullCount > 1)
BEGIN
RAISERROR('Cannot have Multiple Nulls', 16, 1)
ROLLBACK TRAN
END
GO
Well, if you use MS SQL you can just add a unique index on that column. SQL Server treats NULLs as equal for uniqueness, so that will allow only one NULL. Be aware that most other RDBMSs allow multiple NULLs under a unique index, so this trick doesn't necessarily carry over.
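For example (a sketch, reusing the dbo.TESTTRIGGER table from the previous answer):
CREATE UNIQUE INDEX UX_TESTTRIGGER_YourDate
ON dbo.TESTTRIGGER (YourDate);
-- SQL Server treats NULLs as equal for uniqueness, so a second NULL row is rejected
If you only want the restriction to apply to the NULLs, a filtered unique index (adding WHERE YourDate IS NULL) limits the rule to those rows.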