Oracle SQL Check Constraint - sql

What I want to do is simple, and the details are below. I have two tables.
Create Table Event(
IDEvent number (8) primary key,
StartDate date not null,
EndDate date not null
);
This is fine.
Here is the second table.
Create Table Game(
IDGame number (8) primary key,
GameDate date not null,
constraint checkDate
check (GameDate >= to_date(StartDate references from Event(StartDate)))
);
The constraint checkDate is meant to check that GameDate is not earlier than the event's StartDate. When running this I'm getting the error: missing right parenthesis.
My question is: if this is possible to do, why is it giving me an error?

A check constraint on a table can only verify conditions on the columns of that particular table. You cannot refer to columns from other tables.
If you need to verify a condition that involves columns from a different table, you can do it in a before insert/update trigger on that table.
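As a rough sketch of that approach, assuming the Game table also carries an IDEvent foreign key column (the original definition doesn't have one yet, and the trigger name is made up):
CREATE OR REPLACE TRIGGER trg_game_date_check
BEFORE INSERT OR UPDATE OF GameDate ON Game
FOR EACH ROW
DECLARE
  v_start Event.StartDate%TYPE;
BEGIN
  -- Look up the start date of the referenced event (assumes Game.IDEvent exists).
  SELECT StartDate
  INTO v_start
  FROM Event
  WHERE IDEvent = :NEW.IDEvent;

  IF :NEW.GameDate < v_start THEN
    RAISE_APPLICATION_ERROR(-20001, 'GameDate is before the event''s StartDate');
  END IF;
END;
/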

What you want to do is far from simple.
The syntax you propose doesn't work on any RDBMS. It would be nice to have, but none of the RDBMS vendors have implemented it, because enforcing such a cross-table integrity rule would mean locking the referenced table while updating the Game table. If you try to build it yourself, you'll have to do the locking yourself. You'll have to take into account all actions that could possibly violate your rule, such as:
inserting a game
updating the gamedate to a less recent date
updating the event startdate to a more recent date
deleting an event
And for each of these actions you'll have to write code that is multi-user proof, by locking the right records in the other table.
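For instance, here is a sketch of the check for the third action (moving an event's StartDate to a more recent date), again assuming a Game.IDEvent column; the FOR UPDATE is the manual locking mentioned above, and it only covers this one case:
CREATE OR REPLACE TRIGGER trg_event_start_check
BEFORE UPDATE OF StartDate ON Event
FOR EACH ROW
BEGIN
  -- Lock this event's game rows so they cannot change while we validate.
  FOR r IN (SELECT GameDate
            FROM Game
            WHERE IDEvent = :NEW.IDEvent
            FOR UPDATE) LOOP
    IF r.GameDate < :NEW.StartDate THEN
      RAISE_APPLICATION_ERROR(-20002, 'Event has games before the new StartDate');
    END IF;
  END LOOP;
END;
/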
If you want to reduce this complexity, you might want to look at a product called RuleGen (www.rulegen.com).
Or you may want to build a specific API and include the checks in just the right places. You'll still have to do the locking manually yourself in this scenario.
Hope this helps.
Regards,
Rob.

There is one hack you can use, but I doubt that the performance of inserting games or events will be acceptable once the tables grow to a certain size:
CREATE TABLE Event
(
  IDEvent   NUMBER(8) PRIMARY KEY,
  StartDate DATE NOT NULL,
  EndDate   DATE NOT NULL
);

CREATE TABLE Game
(
  IDGame   NUMBER(8) PRIMARY KEY,
  GameDate DATE NOT NULL,
  eventid  NUMBER(8), -- this is different to your table definition
  CONSTRAINT fk_game_event FOREIGN KEY (eventid) REFERENCES event (idevent)
);

CREATE INDEX game_eventid ON game (eventid);

CREATE MATERIALIZED VIEW LOG ON event
  WITH ROWID, SEQUENCE (idevent, startdate) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW LOG ON game
  WITH ROWID, SEQUENCE (idgame, eventid, gamedate) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW mv_event_game
  REFRESH COMPLETE ON COMMIT WITH ROWID
AS
SELECT ev.idevent,
       ev.startdate,
       g.gamedate
  FROM event ev, game g
 WHERE g.eventid = ev.idevent;

ALTER TABLE mv_event_game
  ADD CONSTRAINT check_game_start CHECK (gamedate >= startdate);
Now any transaction that inserts a game that starts before the referenced event will throw an error when trying to commit the transaction:
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning and OLAP options
SQL> INSERT INTO event
2 (idevent, startdate, enddate)
3 values
4 (1, date '2012-01-22', date '2012-01-24');
1 row created.
SQL>
SQL> INSERT INTO game
2 (idgame, eventid, gamedate)
3 VALUES
4 (1, 1, date '2012-01-01');
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (FOOBAR.CHECK_GAME_START) violated
But again: this will make inserts into both tables slower, as the query inside the mview needs to be run each time a commit is performed.
I wasn't able to change the refresh type to FAST, which probably would improve commit performance.

Related

Conditional Unique Constraint on Memory Optimized Tables

I am trying to keep integrity in a MEMORY OPTIMIZED table I have. In that table is a foreign key (uniqueidentifier) that points to another table and an Active flag (bit) denoting whether the record is active or not.
I want to stop inserts from happening if the incoming record has the same foreign key as an existing record, only if the existing record is active (Active = 1).
Because this is a memory optimized table, I am limited in how I can go about this. I have tried creating a unique index and discovered they are not allowed in memory optimized tables.
UPDATE:
I ended up using a stored procedure to solve my problem. The stored procedure will do the check for me prior to the insert or update of a record.
Most folks get around the limitations of In-Memory table constraints using triggers. There are a number of examples listed here:
https://www.mssqltips.com/sqlservertip/3080/workaround-for-lack-of-support-for-constraints-on-sql-server-memoryoptimized-tables/
Specifically for your case, this will mimic a unique constraint for insert statements; the poster has examples for update and delete triggers as well in the link above.
-- note the use of checksum to make a single unique value from the combination of two columns
CREATE TRIGGER InMemory.TR_Customers_Insert ON InMemory.Customers
WITH EXECUTE AS 'InMemoryUsr'
INSTEAD OF INSERT
AS
SET NOCOUNT ON

--CONSTRAINT U_OnDisk_Customersg_1 UNIQUE NONCLUSTERED (CustomerName, CustomerAddress)
IF EXISTS (
    -- Check if rows to be inserted are consistent with CHECK constraint by themselves
    SELECT 0
    FROM INSERTED I
    GROUP BY CHECKSUM(I.CustomerName, I.CustomerAddress)
    HAVING COUNT(0) > 1

    UNION ALL

    -- Check if rows to be inserted are consistent with UNIQUE constraint with existing data
    SELECT 0
    FROM INSERTED I
    INNER JOIN InMemory.tblCustomers C WITH (SNAPSHOT)
            ON C.ChkSum = CHECKSUM(I.CustomerName, I.CustomerAddress)
)
BEGIN
    ;THROW 50001, 'Violation of UNIQUE Constraint! (CustomerName, CustomerAddress)', 1
END

INSERT INTO InMemory.tblCustomers WITH (SNAPSHOT)
(
    CustomerID,
    CustomerName,
    CustomerAddress,
    chksum
)
SELECT NEXT VALUE FOR InMemory.SO_Customers_CustomerID,
       CustomerName,
       CustomerAddress,
       CHECKSUM(CustomerName, CustomerAddress)
FROM INSERTED
GO

Inserting values into the table

I'm trying to insert values into the tables that I created.
These are the values I'm trying to insert.
INSERT INTO DDR_Rental (customer_ID, rental_date, rent_fee, film_title, start_date, expiry_date, rating)
VALUES (12345, '12-Mar-19', '4.99', 'Peppermint', '12-Mar-19', '22-Mar-19', 4);
These are the datatypes and the constraints.
CREATE TABLE DDR_Rental
(customer_ID NUMBER(5),
rental_date DATE,
rent_fee NUMBER(3,2) CONSTRAINT SYS_RENTAL_FEE_NN NOT NULL,
film_title VARCHAR2(20),
start_date DATE,
expiry_date DATE,
rating NUMBER(5),
CONSTRAINT SYS_RENTAL_PK PRIMARY KEY ((customer_ID), (rental_date), (film_title)),
CONSTRAINT SYS_RENTAL_CUS_ID_FK1 FOREIGN KEY (customer_ID) REFERENCES
DDR_CUSTOMER(CUSTOMER_ID),
CONSTRAINT SYS_RENTAL_FILM_TITLE_FK2 FOREIGN KEY (film_title) REFERENCES
DDR_MOVIE_TITLE(FILM_TITLE),
CONSTRAINT SYS_RENTAL_EXP_DATE_CK CHECK (expiry_date >= start_date),
CONSTRAINT SYS_RENTAL_START_DATE_CK CHECK (start_date >= rental_date),
CONSTRAINT SYS_RENTAL_RATING_CK CHECK (REGEXP_LIKE(rating,('[12345]'))));
The error says unique constraint (CPRG250.SYS_RENTAL_PK) violated
It seems like you are trying to add a duplicate rental event for the same film by the same customer on the same day. That can obviously happen if your business logic allows a customer to rent a movie, return it the same day, and rent it again.
Depending on your business rules, you have two ways to deal with this situation:
1. Your business model doesn't allow that. This means the row is a duplicate and you shouldn't add an already existing record, in which case raising that error is perfectly fine: it prevents duplicates, since the event happened only once.
2. Your business model allows that. In this case you should store the time along with the date in rental_date, so that you know when the rental event actually happened. In Oracle there is no separate datetime type to switch to: a DATE column already holds the time down to the second, so you only need to insert values that include the time, for example TO_DATE('12-Mar-19 14:30', 'DD-Mon-RR HH24:MI'). Check the values stored in your table afterwards, since 12-Mar-19 will now carry a time component (midnight, unless you specify one).
In addition to (1), you could wrap your code and handle this exception to return a clear message when it happens, showing that the movie has already been rented.
Moreover, since you don't have a table for storing the movies in your inventory, this can lead to mistakes, since you may have more than one copy of a movie. In that case I suggest creating separate film and film_copy tables to properly identify which copy of a film has been rented, so that another copy can still be rented out.
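A minimal sketch of that exception handling, wrapping the insert in PL/SQL so the duplicate-key error comes back as a clearer message (the procedure name add_rental is made up):
CREATE OR REPLACE PROCEDURE add_rental (
  p_customer_id NUMBER,
  p_rental_date DATE,
  p_rent_fee    NUMBER,
  p_film_title  VARCHAR2,
  p_start_date  DATE,
  p_expiry_date DATE,
  p_rating      NUMBER
) AS
BEGIN
  INSERT INTO DDR_Rental (customer_ID, rental_date, rent_fee, film_title,
                          start_date, expiry_date, rating)
  VALUES (p_customer_id, p_rental_date, p_rent_fee, p_film_title,
          p_start_date, p_expiry_date, p_rating);
EXCEPTION
  WHEN DUP_VAL_ON_INDEX THEN
    -- SYS_RENTAL_PK was violated: this customer already rented this film on that date.
    RAISE_APPLICATION_ERROR(-20001,
      'This customer has already rented this film on that date.');
END;
/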
You have a unique constraint in your table. Your table already has a record with the customer_id, rental_date and film_title you are trying to insert.
Run this query and you will see that the record already exists:
select * from DDR_Rental
where customer_id=12345 and rental_date='12-Mar-19' and film_title='Peppermint'

RDBMS primary key design for row versioning

I want to design the primary key for my table with row versioning. My table contains two main fields, ID and Timestamp, plus a bunch of other fields. For a unique ID, I want to store the previous versions of a record. Hence I am making the primary key a combination of the ID and Timestamp fields.
So, to see all the versions of a particular ID, I can run:
Select * from table_name where ID=<ID_value>
To return the most recent version of an ID, I can use
Select * from table_name where ID=<ID_value> ORDER BY timestamp desc
and take the first row.
My question is: will this query be efficient and run in O(1), instead of scanning the entire table for all entries matching the same ID, given that the ID field is only part of the primary key? Ideally, to get a result in O(1), I should have provided the entire primary key. If it does need a full table scan, how else can I design my primary key so that this request is done in O(1)?
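For reference, a minimal sketch of the layout described above (table and column names shortened and hypothetical; the FETCH FIRST clause assumes a database that supports it):
create table my_versions (
  id int       not null,
  ts timestamp not null,
  -- ... other fields ...
  primary key (id, ts)
);
-- All versions of one ID (the predicate matches the leading primary key column):
select * from my_versions where id = 42;
-- Most recent version of that ID only:
select * from my_versions
where id = 42
order by ts desc
fetch first 1 row only;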
Thanks,
Sriram
The canonical reference on this subject is Effective Timestamping in Databases:
https://www.cs.arizona.edu/~rts/pubs/VLDBJ99.pdf
I usually design with a subset of this paper's recommendations, using a table containing a primary key only, with another referencing table that has that key as well as change_user, valid_from and valid_until columns with appropriate defaults. This makes referential integrity easy, as well as future value insertion and history retention. Index as appropriate, and consider check constraints or triggers to prevent overlaps and gaps if you expose these fields to the application for direct modification. These have an obvious performance overhead.
We then make a "current values view" which is exposed to developers, and is also insertable via an "instead of" trigger.
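A minimal sketch of that layout, assuming PostgreSQL and hypothetical names (thing, thing_version, thing_current); the "instead of" trigger for inserts through the view is left out:
-- Key-only anchor table: foreign keys from other tables point here.
create table thing (
    thing_id int primary key
);
-- Versioned attributes with a validity interval and an audit column.
create table thing_version (
    thing_id    int not null references thing (thing_id),
    valid_from  timestamp not null default now(),
    valid_until timestamp not null default 'infinity',
    change_user text not null default current_user,
    name        text
);
create index on thing_version (thing_id, valid_from);
-- "Current values" view exposed to developers.
create view thing_current as
select thing_id, name
from thing_version
where now() >= valid_from and now() < valid_until;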
It's far easier and better to use the History Table pattern for this.
create table foo (
  foo_id int primary key,
  name text
);

create table foo_history (
  foo_id int,
  version int,
  name text,
  operation char(1) check ( operation in ('u','d') ),
  modified_at timestamp,
  modified_by text,
  primary key (foo_id, version)
);
Create a trigger to copy a foo row to foo_history on update or delete.
See https://wiki.postgresql.org/wiki/Audit_trigger_91plus for a full example with Postgres.
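A minimal sketch of such a trigger in PostgreSQL (function and trigger names are made up, and version is derived with a simple max+1 that is not concurrency-safe):
-- Copy the previous state of a foo row into foo_history on update or delete.
create or replace function copy_foo_to_history() returns trigger
language plpgsql as $$
begin
    insert into foo_history (foo_id, version, name, operation, modified_at, modified_by)
    values (old.foo_id,
            (select coalesce(max(version), 0) + 1
               from foo_history
              where foo_id = old.foo_id),
            old.name,
            case when tg_op = 'UPDATE' then 'u' else 'd' end,
            now(),
            current_user);
    return null;  -- return value is ignored for AFTER triggers
end;
$$;

create trigger foo_history_trg
after update or delete on foo
for each row execute procedure copy_foo_to_history();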

Candidate for Check Constraint

I have a table that holds Tasks for a particular person.
TaskID INT PK
PersonID INT (FK to Person Table)
TaskStatusID INT (FK To list of Statuses)
Deleted DATETIME NULL
The business rule is that a person cannot have more than one active task at a time. A task is 'Active' based on its TaskStatusID. The statuses are:
5=New, 6=In Progress, 8=Under Review, 10=Complete, 11=Cancelled
These are values in my Status table.
So 5, 6, 7, 8 and 9 are active tasks. The rest are finalised.
A person can only have one task which is in an active state.
So, to test if I can add a task for this person, I would do:
CASE WHEN EXISTS(SELECT * FROM Task WHERE PersonID = 123 AND TaskStatusID IN (5,6,7,8,9)) THEN 0 ELSE 1 END AS CanAdd
The table has a lot of rows. Around 200,000.
I was thinking of adding a Check Constraint on this table, so that on update/insert I make that query to see whether the row being added/edited would break data integrity with regard to the business rules.
Is a check constraint suitable for this, or is there a more efficient way to keep the data consistent?
Something like:
ADD CONSTRAINT chk_task CHECK (
EXISTS(SELECT * FROM Task WHERE PersonID = ?? AND TaskStatusID IN (5,6,7,8,9)))
You can't easily do it with a check constraint, because check constraints can (naturally) only make assertions about columns within the same row. There are some kludgy ways to get around that by using a UDF to query other rows, but most implementations I've seen have odd edge cases where it's possible to work around the UDF and end up with invalid rows after all.
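For completeness, the kludgy UDF variant looks roughly like this (a sketch against the question's Task table; it suffers from exactly the edge cases mentioned above, because the constraint is only re-checked on the rows you touch):
-- Scalar UDF that counts active tasks for one person.
CREATE FUNCTION dbo.ActiveTaskCount (@PersonID INT)
RETURNS INT
AS
BEGIN
    RETURN (SELECT COUNT(*)
            FROM dbo.Task
            WHERE PersonID = @PersonID
              AND TaskStatusID IN (5, 6, 7, 8, 9));
END
GO
-- Check constraint calling the UDF; it is only evaluated for inserted or updated rows.
ALTER TABLE dbo.Task
ADD CONSTRAINT chk_one_active_task CHECK (dbo.ActiveTaskCount(PersonID) <= 1);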
What you can do is to create an indexed view that maintains the constraint:
create table dbo.Tasks (
TaskID INT not null primary key,
PersonID INT not null,
TaskStatusID INT not null,
Deleted DATETIME NULL
)
go
create view dbo.DRI_Tasks_OneActivePerPerson
with schemabinding
as
select PersonID from dbo.Tasks
where TaskStatusID IN (5,6,7,8,9)
go
create unique clustered index UX_DRI_Tasks_OneActivePerPerson
on dbo.DRI_Tasks_OneActivePerPerson (PersonID)
And now this insert succeeds (because there's only one row with an active status for person 1):
insert into dbo.Tasks (TaskID,PersonID,TaskStatusID)
values (1,1,5),(2,1,1),(3,1,4)
But this insert fails:
insert into dbo.Tasks (TaskID,PersonID,TaskStatusID)
values (4,2,6),(5,2,8)
With the message:
Cannot insert duplicate key row in object 'dbo.DRI_Tasks_OneActivePerPerson'
with unique index 'UX_DRI_Tasks_OneActivePerPerson'.
The duplicate key value is (2).
If you are using SQL Server 2008 or a later version, you could create a unique filtered index:
CREATE UNIQUE INDEX UQ_ActiveStatus
ON dbo.Task (PersonID)
WHERE TaskStatusID IN (5, 6, 7, 8, 9);
It would act as a unique constraint specifically for rows with the specified statuses. You would only be able to have one row with any of the specified statuses per person.
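For example, with made-up data, the second insert below fails with a duplicate key error on UQ_ActiveStatus, while rows in a finalised status are unaffected:
INSERT INTO dbo.Task (TaskID, PersonID, TaskStatusID) VALUES (100, 7, 5);  -- first active task: OK
INSERT INTO dbo.Task (TaskID, PersonID, TaskStatusID) VALUES (101, 7, 6);  -- second active task: fails
INSERT INTO dbo.Task (TaskID, PersonID, TaskStatusID) VALUES (102, 7, 10); -- finalised task: OK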
You could use a check constraint as above, but the approach I would suggest is to write a DML trigger on insert/update that raises an error when the rule is violated.
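A rough sketch of that trigger, assuming SQL Server (the trigger name is made up; AFTER is used because SQL Server has no BEFORE triggers):
CREATE TRIGGER trg_Task_OneActivePerPerson
ON dbo.Task
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Reject the statement if any affected person now has more than one active task.
    IF EXISTS (SELECT t.PersonID
               FROM dbo.Task t
               WHERE t.TaskStatusID IN (5, 6, 7, 8, 9)
                 AND t.PersonID IN (SELECT PersonID FROM inserted)
               GROUP BY t.PersonID
               HAVING COUNT(*) > 1)
    BEGIN
        ROLLBACK TRANSACTION;
        THROW 50000, 'A person can only have one active task.', 1;
    END
END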

SQL Server Update Trigger only if it's an Update and only for specific values in the column

There is a job that will insert into and update a table called ContactInfo (with 2 columns: Id, EmailId) several times a day.
What's a good way to write a trigger on this table that reverts the EmailId for specific Ids only, whenever the EmailIds for those Ids get updated?
I don't mind hard-coding those Ids in the trigger, since the list is only about 40.
But I specifically want the trigger not to do work for every update, since updates happen all the time, and I don't want the trigger to cause resource issues.
Additional info: the table has about 600k rows and is indexed on Id.
In summary: is it possible for the trigger to fire only when certain values are updated in the column, and not on every update of the column?
One alternative mechanism you might consider would be adding another table, called, say, LockedValues. I'm a bit unsure from your narrative what values you're trying to prevent changes to, but here's an example:
Table T, contains two columns, ID and Val:
create table T (
ID int not null,
Val int not null,
constraint PK_T PRIMARY KEY (ID),
constraint UK_T_Lockable UNIQUE (ID,Val)
)
And we have 3 rows:
insert into T(ID,Val)
select 1,10 union all
select 2,20 union all
select 3,30
And we want to prevent the row with ID 2 from having its Val changed:
create table Locked_T (
ID int not null,
Val int not null,
constraint UQ_Locked_T UNIQUE (ID,Val), --Only really need an index here
constraint FK_Locked_T_T FOREIGN KEY (ID,Val) references T (ID,Val)
)
insert into Locked_T (ID,Val) select 2,20
And so now, of course, any application that is only aware of T will be unable to edit the row with ID 2, but can freely alter rows 1 and 3.
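For example (a sketch using the sample rows above), the first update fails because Locked_T still references (2, 20) through FK_Locked_T_T, while the second succeeds:
-- Fails: the (2, 20) row is referenced by Locked_T, so its Val cannot change.
update T set Val = 99 where ID = 2
-- Succeeds: ID 1 is not in Locked_T.
update T set Val = 98 where ID = 1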
This has the benefit that the enforcement code is built into SQL Server already, so it should be quite efficient. You don't need a unique key on Locked_T, but it should be indexed so that it's quick to detect that values aren't present.
This all assumes that you were going to write a trigger that rejected changes, rather than one that reverted them. For reverting, you'd still have to write a trigger (though I'd still suggest having this separate table, and then writing your trigger to do an update that inner joins inserted with Locked_T - which should still be quite efficient).
(Be warned, however: Triggers that revert changes are evil)