How to substitute a variable when creating a check constraint? - sql

I need to add a required field for newly added rows. However, it is undesirable to set the default value for old rows due to the large size of the table. I need to provide an automated script that will do this.
I tried this, but it does not work:
do $$
declare
  max_id int8;
begin
  select max(id) into max_id from transactions;
  alter table transactions add constraint check_process_is_assigned
    check (id <= max_id or process_id is not null);
end $$;

Utility commands like ALTER TABLE do not accept parameters. Only the basic DML commands SELECT, INSERT, UPDATE, DELETE do.
See:
set "VALID UNTIL" value with a calculated timestamp
“ERROR: there is no parameter $1” in “EXECUTE .. USING ..;” statement in plpgsql
Creating user with password from variables in anonymous block
You need dynamic SQL like:
DO
$do$
BEGIN
   EXECUTE format(
      'ALTER TABLE transactions
       ADD CONSTRAINT check_process_is_assigned CHECK (id <= %s OR process_id IS NOT NULL)'
    , (SELECT max(id) FROM transactions)
   );
END
$do$;
This creates a CHECK constraint based on the current maximum id.
Maybe a NOT VALID constraint would serve better? That is not checked against existing rows:
ALTER TABLE transactions
ADD CONSTRAINT check_process_is_assigned CHECK (process_id IS NOT NULL) NOT VALID;
But you do have to "fix" old rows that get updated in this case. (I.e. assign a value to process_id if it was NULL so far.) See:
Enforce NOT NULL for set of columns with a CHECK constraint only for new rows
Best way to populate a new column in a large table?
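If you do eventually backfill the old rows, a batched UPDATE along these lines keeps each transaction small (a sketch; the batch size and the placeholder value 0 are assumptions):
-- repeat until it reports 0 rows affected, committing between runs
UPDATE transactions t
SET    process_id = 0
WHERE  t.id IN (
   SELECT id
   FROM   transactions
   WHERE  process_id IS NULL
   LIMIT  10000
   );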

Related

How to add a constraint that checks the sum in Oracle SQL?

I'm currently doing a school project in which we need to create a database for a real estate management company. We have an OWNER table, a BUILDING table, and an OWNERSHIP table.
I want to make sure that when I enter a value for the ownership stake percentage, the sum of all ownership stakes from the various owners doesn't exceed 100%. At the moment I have no idea how to do this.
CREATE TABLE Building (
  buildingID    NUMBER(10) NOT NULL PRIMARY KEY,
  qtyUnits      NUMBER(3)  NOT NULL,
  landValue     NUMBER(15) NOT NULL,
  purchasePrice NUMBER(15) NOT NULL
);
CREATE TABLE Owners (
  ownerID   NUMBER(5)    NOT NULL PRIMARY KEY,
  lastName  VARCHAR2(50) NOT NULL,
  firstName VARCHAR2(50) NOT NULL,
  telephone VARCHAR2(50) NOT NULL,
  email     VARCHAR2(10) NOT NULL
);
CREATE TABLE Ownership (
  ownerID        NUMBER(5)   NOT NULL,
  buildingID     NUMBER(5)   NOT NULL,
  ownershipStake NUMBER(5,2) NOT NULL,
  CONSTRAINT PK_Ownership PRIMARY KEY (ownerID, buildingID)
);
All trigger-related solutions share one problem: as soon as you have more than one user in the system, they are not enough to guarantee that the constraint is upheld. For example, if session A inserts an ownershipStake of 51% and session B inserts an ownershipStake of 51%, both inserts will succeed because neither session has committed yet. Then both sessions commit and you have a total ownershipStake of 102%.
One way you can get around this is with an ON COMMIT materialized view with a constraint. Unfortunately, I think materialized views are a feature available only in Oracle Enterprise Edition and not Standard or Express. I don't have an EE instance around to test with, but I think this does what you want:
create materialized view log on ownership
  with primary key, rowid, sequence
  ( ownershipstake )
  including new values;

create materialized view mv_ownership
  refresh fast on commit
as
select buildingid,
       sum(ownershipstake) as total_ownershipstake,
       count(*)            as count_ownershipstake
from   ownership
group  by buildingid;

alter materialized view mv_ownership add (
  constraint ck_100 check ( total_ownershipstake <= 100 )
);
I went to a little extra work to make the materialized view fast-refreshable, so the whole thing doesn't have to be rebuilt on each commit, just the affected buildingid's.
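With this in place, a violating total is rejected at commit time rather than at insert time, roughly like this (a sketch; error text abbreviated):
insert into ownership values (1, 1, 60);
insert into ownership values (2, 1, 60);
commit;
-- ORA-12008: error in materialized view refresh path
-- ORA-02290: check constraint (CK_100) violated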
First of all -- you could use the front-end to manage that in a separate query (i.e. limit the maximum stake by the amount left).
Should you wish to do a database check -- creating a row-level trigger on the Ownership table can help.
EDITED: adding more details
So, maybe you have already discovered that the trigger will encounter "mutating table" and are wondering "what is this guy talking about?"
OK, let me explain: this is not the complete answer to the problem.
My preferred way of dealing with this would be to use a combination of row-level AFTER trigger, an extra supplementary field in the table, and a check constraint.
Add an extra field to the Ownership table -- let's call it owned_pct
Add a check constraint on that field that says owned_pct <= 100
Create a row-level AFTER trigger that will update this value, e.g. for INSERT: update Ownership set owned_pct = nvl(owned_pct, 0) + :new.ownershipStake where buildingID = :new.buildingID;
Note that there will be slightly different update queries for INSERT / DELETE / UPDATE cases, so make sure to test all of those
When the total exceeds 100, this update of the owned_pct column violates the check constraint, and the resulting error rolls back the triggering statement (the initial DML) as well.
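For steps 1 and 2, the DDL might look like this (a sketch; the column and constraint names are my own, and the step-3 trigger is left as described above):
ALTER TABLE Ownership ADD (owned_pct NUMBER(5,2) DEFAULT 0);
ALTER TABLE Ownership ADD CONSTRAINT ck_owned_pct_max CHECK (owned_pct <= 100);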
Edit: Originally deleted this when I realized it was not sufficient when more than one session was involved. Undeleting to show an example of a solution that does not exhibit the "mutating table" problem. You'd have to lock the table so only one session could affect it at a time first.
You can do this with an AFTER STATEMENT trigger. That runs once per insert, update, or delete, after the entire statement is complete. That's a little sloppy, because it validates all the rows in the table, even ones that were not affected, but for your purposes is probably good enough.
create or replace trigger trig1
after insert or update on ownership
declare
  l_count number;
begin
  select count(*) into l_count
  from (
    select buildingid, sum(ownershipstake)
    from ownership
    group by buildingid
    having sum(ownershipstake) > 100
  );

  if l_count > 0 then
    raise_application_error( -20001, 'Totals cant be over 100' );
  end if;
end;
/
insert into ownership values ( 1, 1, 99 );
insert into ownership values ( 2, 1, 2 );
Error starting at line : 24 in command -
insert into ownership values ( 2, 1, 2 )
Error report -
ORA-20001: Totals cant be over 100
As I said, this validates the entire table even though I only inserted a row that affected 1 building here. So if you had a million buildings, it validates 999,999 rows unnecessarily and can have a significant performance impact.
An improved way of doing this is a compound trigger, where at the before each row timing point, you would record the building id of the row being changed. Then, at the after statement timing point, you would validate only the buildingids that had been modified.
Use compound trigger
CREATE OR REPLACE TRIGGER IVAN.trades_partial_kontrola_tg
  FOR INSERT OR UPDATE OR DELETE ON ivan.trades_partial
  COMPOUND TRIGGER

  cNic CONSTANT NUMBER(10) := -9999999999;   -- sentinel used for NULL-safe comparison
  -- CREATE OR REPLACE TYPE IVAN.NUMBER_POLE_TYP as table of number;
  lPole          ivan.number_pole_typ := ivan.number_pole_typ();   -- ids collected at row level
  lPole2         ivan.number_pole_typ;
  lAmountTrades  ivan.trades.amount%TYPE;
  lAmountPartial ivan.trades_partial.amount%TYPE;

  BEFORE EACH ROW IS
  BEGIN
    CASE
      WHEN updating
       AND :new.amount = :old.amount THEN
        NULL;
      WHEN nvl(:new.amount, cNic) <> nvl(:old.amount, cNic) THEN
        -- remember the id of the row whose amount changed
        lPole.extend();
        lPole(lPole.last()) := nvl(:new.id, :old.id);
    END CASE;
  END BEFORE EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- de-duplicate the collected ids, then validate each one
    SELECT DISTINCT column_value BULK COLLECT INTO lPole2 FROM TABLE(lPole);
    lPole.delete;

    FOR a_cur IN (SELECT * FROM TABLE(lPole2))
    LOOP
      SELECT t.amount INTO lAmountTrades
      FROM ivan.trades t
      WHERE t.id = a_cur.column_value;

      SELECT SUM(a.amount) INTO lAmountPartial
      FROM ivan.trades_partial a
      WHERE a.id = a_cur.column_value;

      IF lAmountPartial <> lAmountTrades
      THEN
        ivan.log_centralni_pk.myraise('Wrong amount check');
      END IF;
    END LOOP;
  END AFTER STATEMENT;
END;
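Adapted to the Ownership table from the question, such a compound trigger might look roughly like this (an untested sketch; as noted above, it still needs explicit locking to be safe with concurrent sessions):
CREATE OR REPLACE TRIGGER ownership_stake_check
FOR INSERT OR UPDATE OR DELETE ON ownership
COMPOUND TRIGGER
  -- building ids touched by the current statement
  TYPE t_ids IS TABLE OF ownership.buildingid%TYPE INDEX BY PLS_INTEGER;
  g_ids t_ids;

  AFTER EACH ROW IS
  BEGIN
    g_ids(g_ids.COUNT + 1) := NVL(:new.buildingid, :old.buildingid);
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- validate only the buildings that were actually modified
    -- (duplicates in g_ids are merely re-checked, which is harmless)
    FOR i IN 1 .. g_ids.COUNT LOOP
      FOR r IN (SELECT buildingid, SUM(ownershipstake) AS total
                  FROM ownership
                 WHERE buildingid = g_ids(i)
                 GROUP BY buildingid
                HAVING SUM(ownershipstake) > 100)
      LOOP
        raise_application_error(-20001,
          'Total ownership stake for building ' || r.buildingid || ' would be ' || r.total);
      END LOOP;
    END LOOP;
  END AFTER STATEMENT;
END;
/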

Trigger to update a different table

Using Postgres 9.4, I have 2 tables, streams and comment_replies. What I am trying to do is update the streams.comments count each time a new row is inserted into comment_replies, to keep track of the number of comments a particular stream has. I am not getting any errors, but when I try to create a new comment it gets ignored.
This is how I am setting up my trigger. stream_id is a foreign key, so every stream_id will correspond to a streams.id which is the primary key of the streams table. I have been looking at this example: Postgres trigger function, but haven't been able to get it to work.
CREATE TABLE comment_replies (
id serial NOT NULL PRIMARY KEY,
created_on timestamp without time zone,
comments text,
profile_id integer,
stream_id integer
);
The trigger function:
CREATE OR REPLACE FUNCTION "Comment_Updates"()
RETURNS trigger AS
$BODY$BEGIN
update streams set streams.comments=streams.comments+1
where streams.id=comment_replies_streamid;
END$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
And the trigger:
CREATE TRIGGER comment_add
BEFORE INSERT OR UPDATE
ON comment_replies
FOR EACH ROW
EXECUTE PROCEDURE "Comment_Updates"();
How can I do this?
There are multiple errors. Try instead:
CREATE OR REPLACE FUNCTION comment_update()
  RETURNS trigger AS
$func$
BEGIN
   UPDATE streams s
   SET    comments = s.comments + 1
   -- SET comments = COALESCE(s.comments, 0) + 1  -- if the column can be NULL
   WHERE  s.id = NEW.stream_id;

   RETURN NEW;
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER comment_add
BEFORE INSERT OR UPDATE ON comment_replies  -- on UPDATE, too? Really?
FOR EACH ROW EXECUTE PROCEDURE comment_update();
You need to consider DELETE as well if that is possible (a sketch covering DELETE follows after these notes). Also consider whether an UPDATE can change stream_id. But why increase the count for every UPDATE at all? This looks like another error to me.
It's a syntax error to table-qualify the target column in the SET clause of UPDATE.
You need to return NEW in a BEFORE trigger unless you want to cancel the INSERT / UPDATE.
Or you make it an AFTER trigger, which would work for this, too.
You need to reference NEW for the stream_id of the current row (NEW is automatically visible inside the trigger function).
If streams.comments can be NULL, use COALESCE.
And rather use unquoted, legal, lower-case identifiers.
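If replies can also be deleted, a single trigger function can handle both cases. A sketch (replacing the function above, assuming the column is named stream_id and using an AFTER trigger):
CREATE OR REPLACE FUNCTION comment_count_adjust()
  RETURNS trigger AS
$func$
BEGIN
   IF TG_OP = 'INSERT' THEN
      UPDATE streams SET comments = comments + 1 WHERE id = NEW.stream_id;
      RETURN NEW;
   ELSIF TG_OP = 'DELETE' THEN
      UPDATE streams SET comments = comments - 1 WHERE id = OLD.stream_id;
      RETURN OLD;
   END IF;
END
$func$ LANGUAGE plpgsql;

CREATE TRIGGER comment_count_adjust
AFTER INSERT OR DELETE ON comment_replies
FOR EACH ROW EXECUTE PROCEDURE comment_count_adjust();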

How to add constraint to sql table so that table has exactly one row

Parameter table is initially created and one row is added in Postgres.
This table should have always one row, otherwise SQL queries using this table will produce incorrect results. DELETE or INSERT to this table are disallowed, only UPDATE is allowed.
How to add single row constraint to this table?
Maybe DELETE and INSERT triggers can raise an exception or is there simpler way?
The following will create a table into which you can only insert one single row. Any update of the id column will result in an error, as will any insert with a value other than 42. The actual id value doesn't matter (unless there is some special meaning that you need).
create table singleton
(
id integer not null primary key default 42,
parameter_1 text,
parameter_2 text,
constraint only_one_row check (id = 42)
);
insert into singleton values (default);
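Any further insert is then rejected (a sketch; error messages abbreviated):
insert into singleton values (default);  -- fails: duplicate key value violates unique constraint
insert into singleton (id) values (1);   -- fails: violates check constraint "only_one_row"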
To prevent deletes you can use a rule:
create or replace rule ignore_delete
AS on delete to singleton
do instead nothing;
You could also use a rule to make insert do nothing as well if you want to make an insert "fail" silently. Without the rule, an insert would generate an error. If you want a delete to generate an error as well, you would need to create a trigger that simply raises an exception.
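Such an ignore-insert rule would look just like the delete rule above (a sketch):
create or replace rule ignore_insert
AS on insert to singleton
do instead nothing;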
Edit
If you want an error to be thrown for inserts or deletes, you need a trigger for that:
create table singleton
(
id integer not null primary key,
parameter_1 text,
parameter_2 text
);
insert into singleton (id) values (42);
create or replace function raise_error()
returns trigger
as
$body$
begin
RAISE EXCEPTION 'No changes allowed';
end;
$body$
language plpgsql;
create trigger singleton_trg
before insert or delete on singleton
for each statement execute procedure raise_error();
Note that you have to insert the single row before you create the trigger, otherwise you can't insert that row.
This will only partially work for a superuser or the owner of the table. Both have the privilege to drop or disable the trigger. But that is the nature of a superuser - he can do anything.
To make any table a singleton just add this column:
just_me bool NOT NULL DEFAULT TRUE UNIQUE CHECK (just_me)
This allows exactly one row. Plus add the trigger #a_horse provided.
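In context, that could look like this (a sketch, reusing the parameter columns from above):
CREATE TABLE singleton (
   just_me     bool NOT NULL DEFAULT TRUE UNIQUE CHECK (just_me),
   parameter_1 text,
   parameter_2 text
);
INSERT INTO singleton (parameter_1, parameter_2) VALUES ('foo', 'bar');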
But I would rather use a function instead of the table for this purpose. Simpler and cheaper.
CREATE OR REPLACE FUNCTION one_row()
  RETURNS TABLE (company_id int, company text)
  LANGUAGE sql IMMUTABLE AS
$$SELECT 123, 'The Company'$$;

ALTER FUNCTION one_row() OWNER TO postgres;
Set the owner to the user that should be allowed to change it.
Give a user permission to ALTER a function
Nobody else can change it - except superusers, of course. Superusers can do anything.
You can use this function just like you would use the table:
SELECT * FROM one_row();
If you need a "table", create a view (which is actually a special table internally):
CREATE VIEW one_row AS SELECT * FROM one_row();
I guess you will not use the postgres superuser in your application, so you could simply limit your application user's permissions on this table to UPDATE only.
An INSERT or DELETE will then cause an insufficient-privilege error.
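A sketch, assuming the application connects as a role named app_user:
REVOKE ALL ON singleton FROM PUBLIC;
REVOKE ALL ON singleton FROM app_user;
GRANT SELECT, UPDATE ON singleton TO app_user;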

Oracle - Insert New Row with Auto Incremental ID

I have a workqueue table that has a workid column. The workID column has values that increment automatically. Is there a way I can run a query in the backend to insert a new row and have the workID column increment automatically?
When I try to insert a null, it throws error ORA-01400 - cannot insert NULL into workid.
insert into WORKQUEUE (facilitycode,workaction,description) values ('J', 'II', 'TESTVALUES')
What I have tried so far - I tried to look at the table details and didn't see any auto-increment. The table script is as follow
"WORKID" NUMBER NOT NULL ENABLE,
Database: Oracle 10g
Screenshot of some existing data.
ANSWER:
I have to thank each and every one of you for the help. Today was a great learning experience, and without your support I couldn't have done it. Bottom line: I was trying to insert a row into a table that already has sequences and triggers. All I had to do was find the right sequence for my case and call that sequence in my query.
The links you all provided helped me look these sequences up and find the one for this workid column. Thanks to you all (I gave everyone a thumbs up), I am able to tackle another dragon today and help patient care take a step forward!
This is a simple way to do it without any triggers or sequences:
insert into WORKQUEUE (ID, facilitycode, workaction, description)
values ((select max(ID)+1 from WORKQUEUE), 'J', 'II', 'TESTVALUES')
It worked for me but would not work with an empty table, I guess.
To get an auto increment number you need to use a sequence in Oracle.
(See here and here).
CREATE SEQUENCE my_seq;
SELECT my_seq.NEXTVAL FROM DUAL; -- to get the next value
-- use in a trigger for your table demo
CREATE OR REPLACE TRIGGER demo_increment
BEFORE INSERT ON demo
FOR EACH ROW
BEGIN
SELECT my_seq.NEXTVAL
INTO :new.id
FROM dual;
END;
/
There is no built-in auto_increment in Oracle.
You need to use sequences and triggers.
Read here how to do it right. (Step-by-step how-to for "Creating auto-increment columns in Oracle")
ELXAN#DB1> create table cedvel(id integer,ad varchar2(15));
Table created.
ELXAN#DB1> alter table cedvel add constraint pk_ad primary key(id);
Table altered.
ELXAN#DB1> create sequence test_seq start with 1 increment by 1;
Sequence created.
ELXAN#DB1> create or replace trigger ad_insert
before insert on cedvel
REFERENCING NEW AS NEW OLD AS OLD
for each row
begin
select test_seq.nextval into :new.id from dual;
end;
/ 2 3 4 5 6 7 8
Trigger created.
ELXAN#DB1> insert into cedvel (ad) values ('nese');
1 row created.
You can use either a SEQUENCE or a TRIGGER to automatically increment the value of a given column in your database table; however, the use of TRIGGERS would be more appropriate. See the Oracle documentation, which covers the major clauses used with triggers, with suitable examples.
Use the CREATE TRIGGER statement to create and enable a database trigger, which is:
A stored PL/SQL block associated with a table, a schema, or the database, or
An anonymous PL/SQL block or a call to a procedure implemented in PL/SQL or Java.
Oracle Database automatically executes a trigger when specified conditions occur.
Following is a simple TRIGGER just as an example for you that inserts the primary key value in a specified table based on the maximum value of that column. You can modify the schema name, table name etc and use it. Just give it a try.
/* Create a database trigger that automatically generates primary key values
   on the CITY table using the MAX function. */
CREATE OR REPLACE TRIGGER PROJECT.PK_MAX_TRIGGER_CITY
BEFORE INSERT ON PROJECT.CITY
FOR EACH ROW
DECLARE
  CNT NUMBER;
  PKV CITY.CITY_ID%TYPE;
  NO  NUMBER;
BEGIN
  SELECT COUNT(*) INTO CNT FROM CITY;
  IF CNT = 0 THEN
    PKV := 'CT0001';
  ELSE
    SELECT 'CT' || LPAD(MAX(TO_NUMBER(SUBSTR(CITY_ID, 3, LENGTH(CITY_ID))) + 1), 4, '0')
    INTO PKV
    FROM CITY;
  END IF;
  :NEW.CITY_ID := PKV;
END;
This would automatically generate values such as CT0001, CT0002, CT0003 and so on and insert them into the given column of the specified table.
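For example, an insert that omits the key column gets one generated (CITY_NAME is a hypothetical column, used only for illustration):
INSERT INTO PROJECT.CITY (CITY_NAME) VALUES ('Springfield');
-- the trigger fills CITY_ID, e.g. CT0001 for the first row in an empty table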
SQL trigger for automatic date generation in oracle table:
CREATE OR REPLACE TRIGGER name_of_trigger
BEFORE INSERT
ON table_name
REFERENCING NEW AS NEW
FOR EACH ROW
BEGIN
SELECT sysdate INTO :NEW.column_name FROM dual;
END;
/
For the complete know-how, I have included an example of the trigger and sequence:
create table temasforo(
idtemasforo NUMBER(5) PRIMARY KEY,
autor VARCHAR2(50) NOT NULL,
fecha DATE DEFAULT (sysdate),
asunto LONG );
create sequence temasforo_seq
start with 1
increment by 1
nomaxvalue;
create or replace
trigger temasforo_trigger
before insert on temasforo
referencing OLD as old NEW as new
for each row
begin
:new.idtemasforo:=temasforo_seq.nextval;
end;
reference:
http://thenullpointerexceptionx.blogspot.mx/2013/06/llaves-primarias-auto-incrementales-en.html
For completeness, I'll mention that Oracle 12c does support this feature. Also it's supposedly faster than the triggers approach. For example:
CREATE TABLE foo
(
  id   NUMBER GENERATED BY DEFAULT AS IDENTITY (START WITH 1 NOCACHE ORDER) NOT NULL,
  name VARCHAR2(50)
)
LOGGING;

ALTER TABLE foo ADD CONSTRAINT foo_PK PRIMARY KEY (id);
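Once the identity column is in place, inserts can simply omit it (a sketch):
INSERT INTO foo (name) VALUES ('first row');             -- id is generated automatically
INSERT INTO foo (id, name) VALUES (100, 'explicit id');  -- still allowed with GENERATED BY DEFAULT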
Best approach: Get the next value from sequence
The nicest approach is getting the NEXTVAL from the SEQUENCE "associated" with the table. Since a sequence is not directly associated with any specific table, we have to identify the right one manually from the naming convention.
A sequence used for a table, if it follows the usual naming convention, will mention the table name inside its own name, something like <table_name>_SEQ. You will immediately recognize it the moment you see it.
First, check the Oracle data dictionary for any sequence "associated" with the table:
SELECT * FROM all_sequences
WHERE SEQUENCE_OWNER = '<schema_name>';
This lists the sequences owned by that schema, including their names and current LAST_NUMBER values.
Grab that SEQUENCE_NAME and use its NEXTVAL in your INSERT query:
INSERT INTO workqueue(id, value) VALUES (workqueue_seq.NEXTVAL, 'A new value...')
Additional tip
In case you're unsure whether this sequence is actually associated with the table, just compare the LAST_NUMBER of the sequence (i.e. its current value) with the maximum id in that table. The LAST_NUMBER should be greater than or equal to the current maximum id value in the table, as long as the gap is not suspiciously large:
SELECT LAST_NUMBER
FROM all_sequences
WHERE SEQUENCE_OWNER = '<schema_name>'
  AND SEQUENCE_NAME = 'WORKQUEUE_SEQ';  -- the data dictionary stores unquoted names in upper case
SELECT MAX(ID)
FROM workqueue;
Reference: Oracle CURRVAL and NEXTVAL
Alternative approach: Get the current max id from the table
The alternative approach is getting the max value from the table; please refer to Zsolt Sky's answer to this same question:
This is a simple way to do it without any triggers or sequences:
insert into WORKQUEUE (ID, facilitycode, workaction, description)
values ((select count(1)+1 from WORKQUEUE), 'J', 'II', 'TESTVALUES');
Note: here you need to use count(1) in place of the max(id) column.
It also works perfectly for an empty table.

Oracle - Modify an existing table to auto-increment a column

I have a table with the following column:
NOTEID NUMBER NOT NULL,
For all intents and purposes, this column is the primary key. This table has a few thousand rows, each with a unique ID. Before, the application would SELECT the MAX() value from the table, add one, then use that as the next value. This is a horrible solution and is not transaction or thread safe (in fact, before this there wasn't even a UNIQUE constraint on the column, and I could see that the same NOTEID was duplicated on 9 different occasions).
I'm rather new to Oracle, so I'd like to know the best syntax to ALTER this table and make this column auto-increment instead. If possible, I'd like to make the next value in the sequence be the MAX(NOTEID) + 1 in the table, or just make it 800 or something to start out. Thanks!
You can't alter the table; Oracle (prior to 12c) doesn't support declarative auto-incrementing columns. You can create a sequence:
CREATE SEQUENCE note_seq
START WITH 800
INCREMENT BY 1
CACHE 100;
Then, you can create a trigger
CREATE OR REPLACE TRIGGER populate_note_id
BEFORE INSERT ON note
FOR EACH ROW
BEGIN
:new.note_id := note_seq.nextval;
END;
or, if you want to allow callers to specify a non-default NOTE_ID
CREATE OR REPLACE TRIGGER populate_note_id
BEFORE INSERT ON note
FOR EACH ROW
BEGIN
IF( :new.note_id is null )
THEN
:new.note_id := note_seq.nextval;
END IF;
END;
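The asker also wanted the sequence to start at MAX(NOTEID) + 1. CREATE SEQUENCE does not accept a subquery for START WITH, so one option is to build the statement dynamically and run it instead of the fixed START WITH 800 above (a sketch, assuming the table is called note as in the triggers):
DECLARE
   l_start NUMBER;
BEGIN
   SELECT NVL(MAX(noteid), 0) + 1 INTO l_start FROM note;
   -- build the DDL as a string because START WITH must be a literal
   EXECUTE IMMEDIATE 'CREATE SEQUENCE note_seq START WITH ' || l_start
                  || ' INCREMENT BY 1 CACHE 100';
END;
/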
If your MAX(noteid) is 799, then try:
CREATE SEQUENCE noteseq
START WITH 800
INCREMENT BY 1
Then when inserting a new record, for the NOTEID column, you would do:
noteseq.nextval
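Used in an insert, that looks like this (a sketch; the table name note and the NOTE_TEXT column are assumptions for illustration):
INSERT INTO note (noteid, note_text)
VALUES (noteseq.NEXTVAL, 'a new note');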