How to do a cross-column unique constraint in SQL (Oracle)

How can I have a unique constraint in an Oracle DB spanning two columns, so that a duplicate must not occur in one or the other?
Assume this table
|id | A | B |
|---|---|---|
| 1 | 1 | 2 |
| 2 | 3 | 4 |
I want a new row to be rejected if, in column "A" or "B", it holds a value that duplicates a value from column "A" or "B" of any existing row.
In the example above: I am allowed to add 5 to column "A" but not 1, 2, 3, or 4.
My idea was to do something like:
CREATE UNIQUE INDEX crossTest ON test (
    SELECT t.A AS x FROM test t
    UNION ALL
    SELECT t.B AS x FROM test t
)
but it does not work because Oracle does not accept this syntax.
The two classic approaches do not work:
two single-column unique indexes, CREATE UNIQUE INDEX uidxA ON test (a) and CREATE UNIQUE INDEX uidxB ON test (b), do not work because I could then still add 2 and 4 to column "A";
one unique index over both columns, CREATE UNIQUE INDEX uidxAB ON test (a, b), does not work because it only checks existing pairs.
(Bonus question: it should be allowed for "A" and "B" of the same row to be equal.)
SQL scripts for the example
CREATE TABLE test (id NUMBER (10) NOT NULL, a VARCHAR2(12), b VARCHAR2(12));
INSERT INTO test (id,a,b) VALUES(1, '1', '2');
INSERT INTO test (id,a,b) VALUES(2, '3', '4');
INSERT INTO test (id,a,b) VALUES(3, '4', 'x'); -- should fail
INSERT INTO test (id,a,b) VALUES(3, '5', 'x'); -- should work

@Tejash's answer gave me an idea to avoid locking or serialization. You can create an auxiliary table duet_index to produce the extended data set with all rows. Then a simple trigger will do the trick, including your bonus question.
For example:
create table duet_index (
    n varchar2(12),  -- match the datatype of test.a and test.b
    constraint uq1 unique (n)
);
And then the trigger:
create or replace trigger test_trg
before insert on test
for each row
begin
    -- register both new values; the unique constraint on duet_index
    -- rejects any value already used in column a or b of any row
    insert into duet_index (n) values (:new.a);
    -- skip the second insert when a = b in the same row (bonus question)
    if (:new.a <> :new.b) then
        insert into duet_index (n) values (:new.b);
    end if;
end;
/
Please consider I'm not proficient at writing Oracle triggers. The syntax can be wrong, but the idea should fly.
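With the trigger in place, the test script from the question should behave as intended (a sketch; note that DELETEs and UPDATEs on test would need similar triggers to keep duet_index in sync):
INSERT INTO test (id,a,b) VALUES (1, '1', '2'); -- ok: duet_index holds '1','2'
INSERT INTO test (id,a,b) VALUES (2, '3', '4'); -- ok: duet_index holds '1','2','3','4'
INSERT INTO test (id,a,b) VALUES (3, '4', 'x'); -- fails: '4' violates uq1 (ORA-00001)
INSERT INTO test (id,a,b) VALUES (3, '5', 'x'); -- ok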

I've been working with Oracle for decades now and I don't recall having such a requirement. It makes me nervous about your data model.
What you want to do cannot be done with a single index. Trigger-based approaches are going to have trouble working correctly in all multi-user cases. A materialized-view approach seems promising.
My suggestion is to create a materialized view that refreshes on commit and that contains a concatenation (UNION ALL) of the column A and column B values.
Here is what I mean (see comments in code for more details):
create table test1 ( id number not null primary key, a number, b number );
insert into test1 values ( 1, 1, 2);
insert into test1 values ( 2, 3, 4);
commit;
-- Create a materialized view log to allow a REFRESH FAST ON COMMIT materialized view...
create materialized view log on test1 with primary key, rowid;
-- And create that materialized view... it will be refreshed whenever changes to TEST1 are committed
create materialized view test1_concat
refresh fast on commit
as
select t1.rowid row_id, 1 as marker, t1.a concatenation from test1 t1
union all
select t2.rowid row_id, 2 as marker, t2.b concatenation from test1 t2
-- this next bit allows a = b in single rows (i.e., bonus question)
where t2.a != t2.b;
-- Now, enforce the constraint on our materialized view to prevent cross-column duplicates
create unique index test1_concat_u1 on test1_concat ( concatenation );
-- Test #1 -- column a may equal column b without error (bonus!)
insert into test1 values ( 3, 5, 5);
commit;
-- Test #2 uniqueness enforced
insert into test1 values ( 4, 6, 1);
-- (no error at this point)
commit;
> ORA-12008: error in materialized view refresh path ORA-00001: unique
> constraint (APPS.TEST1_CONCAT_U1) violated
Drawbacks
1. There is a scalability issue here. Oracle will synchronize on the commit. Every working solution to your problem will have this drawback, I believe.
2. You do not get an error until the transaction tries to commit, at which point it is impossible to correct and recover the transaction. I believe you cannot solve this drawback in any solution without making drawback #1 much worse (i.e., without much more extensive and longer-lasting locks on your table).

I suggest fixing your data model, so the values are in rows rather than columns:
CREATE TABLE test (
    id NUMBER(10) NOT NULL,
    type varchar2(1) check (type in ('A', 'B')),
    value varchar2(12),
    unique (value),
    unique (id, type)
);
The unique constraint is then easy.
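For example, the data from the question maps onto this model as follows, and the cross-column duplicate is rejected by the plain unique constraint (a sketch):
INSERT INTO test (id, type, value) VALUES (1, 'A', '1');
INSERT INTO test (id, type, value) VALUES (1, 'B', '2');
INSERT INTO test (id, type, value) VALUES (2, 'A', '3');
INSERT INTO test (id, type, value) VALUES (2, 'B', '4');
INSERT INTO test (id, type, value) VALUES (3, 'A', '4'); -- fails: unique (value) violated
Note that unique (value) also rejects the bonus case where "A" and "B" of one logical row hold the same value, since that value would then appear in two rows here.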

Not possible using an INDEX or CONSTRAINT alone. You need a trigger, something like this (note that a row-level trigger that queries its own table raises ORA-04091 for multi-row inserts, and it does not guard against concurrent sessions):
CREATE OR REPLACE TRIGGER TEST_TRG
BEFORE INSERT ON TEST
FOR EACH ROW
DECLARE
    CNT NUMBER := 0;
BEGIN
    SELECT COUNT(1) INTO CNT FROM TEST
    WHERE A = :NEW.A OR B = :NEW.A OR A = :NEW.B OR B = :NEW.B;
    IF CNT > 0 THEN
        raise_application_error(-20111, 'This value is not allowed');
    END IF;
END;
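With this trigger, the question's test inserts behave as expected (a sketch):
INSERT INTO test (id,a,b) VALUES (3, '4', 'x');
-- ORA-20111: This value is not allowed
INSERT INTO test (id,a,b) VALUES (3, '5', 'x');
-- 1 row inserted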

Related

How to Insert new Record into Table if the Record is not Present in the Table in Teradata

I want to insert a new record if the record is not present in the table.
For that I am using the below query in Teradata:
INSERT INTO sample(id, name) VALUES('12','rao')
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
When I execute the above query, I get the error below:
WHERE NOT EXISTS
Failure 3706 Syntax error: expected something between ')' and the 'WHERE' keyword.
Can anyone help with the above issue? It would be very helpful.
You can use INSERT INTO ... SELECT ... as follows:
INSERT INTO sample(id,name)
select '12','rao'
WHERE NOT EXISTS (SELECT id FROM sample WHERE id = '12');
You can also create the primary/unique key on id column to avoid inserting duplicate data in id column.
I would advise writing the query as:
INSERT INTO sample (id, name)
SELECT id, name
FROM (SELECT 12 as id, 'rao' as name) x
WHERE NOT EXISTS (SELECT 1 FROM sample s WHERE s.id = x.id);
This means that you do not need to repeat the constant value -- such repetition can be a cause of errors in queries. Note that I removed the single quotes. id looks like a number so treat it as a number.
The uniqueness of ids is usually handled using a unique constraint or index:
alter table sample add constraint unq_sample_id unique (id);
This makes sure that the database ensures uniqueness. Your approach can fail if two inserts are run at the same time with the same id. An attempt to insert a duplicate returns an error (which the exists can then avoid).
In practice, id columns are usually generated automatically by the database. So the create table statement would look more like:
id integer generated by default as identity
And the insert would look like:
insert into sample (name)
values ('rao');
If id is the Primary Index of the table you can use MERGE:
merge into sample as tgt
using VALUES('12','rao') as src (id, name)
on src.id = tgt.id
when not matched
then insert (src.id,src.name)

Prevent consecutive duplicate values without a trigger

Within a group, I'd like to prevent INSERTs of consecutive duplicate values, where "consecutive" is defined by a simple ORDER BY clause.
Imagine a set of experiments which is regularly sampling values from a sensor. We only want to insert a value if it is new for that experiment.
Note that older values are allowed to be duplicates. So this is allowed:
id  experiment  value
 1  A           10
 2  A           20
 3  A           10
but this is not:
id  experiment  value
 1  A           10
 2  A           10
I know how to find the previous value per experiment:
SELECT
*,
lag(sample_value) OVER experiment_and_id
FROM new_samples
WINDOW experiment_and_id AS (
PARTITION BY experiment
ORDER BY id
);
From the docs I know that CHECK constraints are not allowed to use other rows in their checking:
PostgreSQL does not support CHECK constraints that reference table data other than the new or updated row being checked. While a CHECK constraint that violates this rule may appear to work in simple tests, it cannot guarantee that the database will not reach a state in which the constraint condition is false (due to subsequent changes of the other row(s) involved). This would cause a database dump and reload to fail. The reload could fail even when the complete database state is consistent with the constraint, due to rows not being loaded in an order that will satisfy the constraint. If possible, use UNIQUE, EXCLUDE, or FOREIGN KEY constraints to express cross-row and cross-table restrictions.
If what you desire is a one-time check against other rows at row insertion, rather than a continuously-maintained consistency guarantee, a custom trigger can be used to implement that. (This approach avoids the dump/reload problem because pg_dump does not reinstall triggers until after reloading data, so that the check will not be enforced during a dump/reload.)
The EXCLUDE constraint looks promising, but is primarily for cases where the test is not equality. And I'm not sure if I can include window functions in there.
So I'm left with a custom trigger but this seems like a bit of a hack for what seems like a fairly common use case.
Can anyone improve on using a trigger?
Ideally, I'd like to be able to just say:
INSERT ....
ON CONFLICT DO NOTHING
and have Postgres deal with the rest!
Minimum working example
BEGIN;
CREATE TABLE new_samples (
id INT GENERATED ALWAYS AS IDENTITY,
experiment VARCHAR,
sample_value INT
);
INSERT INTO new_samples(experiment, sample_value)
VALUES
('A', 1),
-- This is fine because they are for different groups
('B', 1),
-- This is fine because the value has changed
('A', 2),
-- This is fine because it's different to the previous value in
-- experiment A.
('A', 1),
-- This is not allowed here because it's the same as the value
-- before it, within this experiment.
('A', 1);
SELECT
*,
lag(sample_value) OVER experiment_and_id
FROM new_samples
WINDOW experiment_and_id AS (
PARTITION BY experiment
ORDER BY id
);
ROLLBACK;
If the samples will not change, then the restriction cited in the docs will not be relevant to your use case.
You can create a function to accomplish this:
create or replace function check_new_sample(_experiment text, _sample_value int)
returns boolean as
$$
    -- Compare the incoming value with the latest (highest id) value for
    -- this experiment; an empty result yields NULL, which the CHECK passes.
    select _sample_value != first_value(sample_value)
                            over (partition by experiment
                                  order by id desc)
    from new_samples
    where experiment = _experiment;
$$ language sql;
alter table new_samples add constraint new_samples_ck_repeat
check (check_new_sample(experiment, sample_value));
Example inserts:
insert into new_samples (experiment, sample_value) values ('A', 1);
INSERT 0 1
insert into new_samples (experiment, sample_value) values ('B', 1);
INSERT 0 1
insert into new_samples (experiment, sample_value) values ('A', 2);
INSERT 0 1
insert into new_samples (experiment, sample_value) values ('A', 1);
INSERT 0 1
insert into new_samples (experiment, sample_value) values ('A', 1);
ERROR: new row for relation "new_samples" violates check constraint "new_samples_ck_repeat"
DETAIL: Failing row contains (5, A, 1).

SQLite auto-increment non-primary key field

Is it possible to have a non-primary key to be auto-incremented with every insertion?
For example, I want to have a log, where every log entry has a primary key (for internal use), and a revision number ( a INT value that I want to be auto-incremented).
As a workaround, this could be done with a sequence, yet I believe that sequences are not supported in SQLite.
You can do select max(id)+1 when you do the insertion.
For example:
INSERT INTO Log (id, rev_no, description)
VALUES ((SELECT MAX(id) + 1 FROM log), 'rev_Id', 'some description')
Note that this will fail on an empty table, since MAX(id) returns NULL there; you can either add a first dummy entry or change the SQL statement to this:
INSERT INTO Log (id, rev_no, description)
VALUES ((SELECT IFNULL(MAX(id), 0) + 1 FROM Log), 'rev_Id', 'some description')
SQLite creates a unique row id (rowid) automatically. This field is usually left out when you use "select * ...", but you can fetch it with "select rowid, * ...". Be aware that, according to the SQLite documentation, the use of AUTOINCREMENT is discouraged.
create table myTable ( code text, description text );
insert into myTable values ( 'X', 'some descr.' );
select rowid, * from myTable;
The result will be:
1|X|some descr.
If you use this id as a foreign key, you can export rowid - AND import the correct value - in order to keep data integrity:
insert into myTable (rowid, code, description)
values ( 1894, 'X', 'some descr.' );
You could use a trigger (http://www.sqlite.org/lang_createtrigger.html) that checks the previous highest value and then increments it, or, if you are doing your inserts through a stored procedure, put that same logic in there.
My answer is very similar to Icarus's, so there is no need to repeat it.
You can use Icarus's solution in a more advanced way if needed. Below is an example of a seat-availability table for a train reservation system.
insert into Availiability (date,trainid,stationid,coach,seatno)
values (
'11-NOV-2013',
12076,
'SRR',
1,
(select max(seatno)+1
from Availiability
where date='11-NOV-2013'
and trainid=12076
and stationid='SRR'
and coach=1)
);
You can use an AFTER INSERT trigger to emulate a sequence in SQLite (but note that numbers might be reused if rows are deleted). This will make your INSERT INTO statement a lot easier.
In the following example, the revision column will be auto-incremented (unless the INSERT INTO statement explicitly provides a value for it, of course):
CREATE TABLE test (
    id INTEGER PRIMARY KEY NOT NULL,
    revision INTEGER,
    description TEXT NOT NULL
);
CREATE TRIGGER auto_increment_trigger
AFTER INSERT ON test
WHEN new.revision IS NULL
BEGIN
    UPDATE test
    SET revision = (SELECT IFNULL(MAX(revision), 0) + 1 FROM test)
    WHERE id = new.id;
END;
Now you can simply insert a new row like this, and the revision column will be auto-incremented:
INSERT INTO test (description) VALUES ('some description');

SQL Insert into 2 tables, passing the new PK from one table as the FK in the other

How can I achieve an insert query on 2 tables that inserts the primary key generated by one table's insert as a foreign key into the second table?
Here's a quick example of what I'm trying to do, but I'd like this to be one query, perhaps a join.
INSERT INTO Table1 (col1, col2) VALUES (val1, val2);
INSERT INTO Table2 (foreign_key_column) VALUES (primary_key_from_table1_insert);
I'd like this to be one join query.
I've made some attempts but I can't get this to work correctly.
This is not possible to do with a single query.
The record in the PK table needs to be inserted before the new PK is known and can be used in the FK table, so at least two queries are required (though normally 3, as you need to retrieve the new PK value for use).
The exact syntax depends on the database being used, which you have not specified.
If you need this set of inserts to be atomic, use transactions.
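A sketch of that pattern, using the question's placeholder names (how you read back the new PK is database-specific: LAST_INSERT_ID() in MySQL, SCOPE_IDENTITY() in SQL Server, RETURNING in PostgreSQL and Oracle):
BEGIN;
INSERT INTO Table1 (col1, col2) VALUES (val1, val2);
-- read the generated primary key here, then use it below;
-- new_pk_value stands for that retrieved value
INSERT INTO Table2 (foreign_key_column) VALUES (new_pk_value);
COMMIT;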
Despite what others have answered, this absolutely is possible, although it takes 2 queries made consecutively with the same connection (to maintain the session state).
Here's the mysql solution (with executable test code below):
INSERT INTO Table1 (col1, col2) VALUES ( val1, val2 );
INSERT INTO Table2 (foreign_key_column) VALUES (LAST_INSERT_ID());
Note: These should be executed using a single connection.
Here's the test code:
create table tab1 (id int auto_increment primary key, note text);
create table tab2 (id int auto_increment primary key, tab2_id int references tab1, note text);
insert into tab1 values (null, 'row 1');
insert into tab2 values (null, LAST_INSERT_ID(), 'row 1');
select * from tab1;
select * from tab2;
mysql> select * from tab1;
+----+-------+
| id | note |
+----+-------+
| 1 | row 1 |
+----+-------+
1 row in set (0.00 sec)
mysql> select * from tab2;
+----+---------+-------+
| id | tab2_id | note |
+----+---------+-------+
| 1 | 1 | row 1 |
+----+---------+-------+
1 row in set (0.00 sec)
From your example, if the tuple (col1, col2) can be considered unique, then you could do:
INSERT INTO table1 (col1, col2) VALUES (val1, val2);
INSERT INTO table2 (foreign_key_column)
VALUES ((SELECT id FROM table1 WHERE col1 = val1 AND col2 = val2));
There may be a few ways to accomplish this. Probably the most straight forward is to use a stored procedure that accepts as input all the values you need for both tables, then inserts to the first, retrieves the PK, and inserts to the second.
If your DB supports it, you can also tell the first INSERT to return a value:
INSERT INTO table1 ... RETURNING primary_key;
This at least saves the SELECT step that would otherwise be necessary. If you go with a stored procedure approach, you'll probably want to incorporate this into that stored procedure.
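In PostgreSQL, for example, RETURNING can feed a data-modifying CTE, so both inserts run as a single statement (a sketch; the table and column names are the question's placeholders):
WITH new_row AS (
    INSERT INTO table1 (col1, col2)
    VALUES (val1, val2)
    RETURNING primary_key
)
INSERT INTO table2 (foreign_key_column)
SELECT primary_key FROM new_row;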
It could also possibly be done with a combination of views and triggers--if supported by your DB. This is probably far messier than it's worth, though. I believe this could be done in PostgreSQL, but I'd still advise against it. You'll need a view that contains all of the columns represented by both table1 and table2, then you need an ON INSERT DO INSTEAD trigger with three parts--the first part inserts to the new table, the second part retrieves the PK from the first table and updates the NEW result, and the third inserts to the second table. (Note: This view doesn't even have to reference the two literal tables, and would never be used for queries--it only has to contain columns with names/data types that match the real tables)
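For what it's worth, here is a minimal PostgreSQL sketch of that view-plus-trigger idea, using an INSTEAD OF trigger (the trigger counterpart of a DO INSTEAD rule); all names are hypothetical, and RETURNING ... INTO folds the PK-retrieval step into the first insert:
CREATE TABLE table1 (id serial PRIMARY KEY, col1 text, col2 text);
CREATE TABLE table2 (id serial PRIMARY KEY, foreign_key_column int REFERENCES table1);
-- a view whose columns match what callers want to insert
CREATE VIEW both_tables AS
    SELECT col1, col2 FROM table1;
CREATE FUNCTION both_tables_insert() RETURNS trigger AS $$
DECLARE
    new_id int;
BEGIN
    INSERT INTO table1 (col1, col2) VALUES (NEW.col1, NEW.col2)
        RETURNING id INTO new_id;
    INSERT INTO table2 (foreign_key_column) VALUES (new_id);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER both_tables_ins
    INSTEAD OF INSERT ON both_tables
    FOR EACH ROW EXECUTE FUNCTION both_tables_insert();
-- a single INSERT now writes to both tables:
INSERT INTO both_tables (col1, col2) VALUES ('v1', 'v2');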
Of course all of these methods are just complicated ways of getting around the fact that you can't really do what you want with a single command.

Swap unique indexed column values in database

I have a database table and one of the fields (not the primary key) has a unique index on it. Now I want to swap values under this column for two rows. How could this be done? Two hacks I know are:
1. Delete both rows and re-insert them.
2. Update the rows with some other value, swap them, and then update to the actual values.
But I don't want to go for these as they do not seem to be the appropriate solution to the problem.
Could anyone help me out?
The magic word is DEFERRABLE here:
DROP TABLE ztable CASCADE;
CREATE TABLE ztable
( id integer NOT NULL PRIMARY KEY
, payload varchar
);
INSERT INTO ztable(id,payload) VALUES (1,'one' ), (2,'two' ), (3,'three' );
SELECT * FROM ztable;
-- This works, because there is no constraint
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
ALTER TABLE ztable ADD CONSTRAINT OMG_WTF UNIQUE (payload)
DEFERRABLE INITIALLY DEFERRED
;
-- This should also work, because the constraint
-- is deferred until "commit time"
UPDATE ztable t1
SET payload=t2.payload
FROM ztable t2
WHERE t1.id IN (2,3)
AND t2.id IN (2,3)
AND t1.id <> t2.id
;
SELECT * FROM ztable;
RESULT:
DROP TABLE
NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "ztable_pkey" for table "ztable"
CREATE TABLE
INSERT 0 3
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
UPDATE 2
id | payload
----+---------
1 | one
2 | three
3 | two
(3 rows)
NOTICE: ALTER TABLE / ADD UNIQUE will create implicit index "omg_wtf" for table "ztable"
ALTER TABLE
UPDATE 2
id | payload
----+---------
1 | one
2 | two
3 | three
(3 rows)
I think you should go for solution 2. There is no 'swap' function in any SQL variant I know of.
If you need to do this regularly, I suggest solution 1, depending on how other parts of the software are using this data. You can have locking issues if you're not careful.
But in short: there is no other solution than the ones you provided.
Further to Andy Irving's answer
this worked for me (on SQL Server 2005) in a similar situation
where I have a composite key and I need to swap a field which is part of the unique constraint.
key: pID, LNUM
rec1: 10, 0
rec2: 10, 1
rec3: 10, 2
and I need to swap LNUM so that the result is
key: pID, LNUM
rec1: 10, 1
rec2: 10, 2
rec3: 10, 0
the SQL needed:
UPDATE DOCDATA
SET LNUM = CASE LNUM
WHEN 0 THEN 1
WHEN 1 THEN 2
WHEN 2 THEN 0
END
WHERE (pID = 10)
AND (LNUM IN (0, 1, 2))
There is another approach that works with SQL Server: use a temp table and join to it in your UPDATE statement.
The problem is caused by having two rows with the same value at the same time, but if you update both rows at once (to their new, unique values), there is no constraint violation.
Pseudo-code:
-- setup initial data values:
insert into data_table(id, name) values(1, 'A')
insert into data_table(id, name) values(2, 'B')
-- create temp table that matches live table
select top 0 * into #tmp_data_table from data_table
-- insert records to be swapped
insert into #tmp_data_table(id, name) values(1, 'B')
insert into #tmp_data_table(id, name) values(2, 'A')
-- update both rows at once! No index violations!
update data_table set name = #tmp_data_table.name
from data_table join #tmp_data_table on (data_table.id = #tmp_data_table.id)
Thanks to Rich H for this technique.
Assuming you know the PK of the two rows you want to update... This works in SQL Server, can't speak for other products. SQL is (supposed to be) atomic at the statement level:
CREATE TABLE testing
(
cola int NOT NULL,
colb CHAR(1) NOT NULL
);
CREATE UNIQUE INDEX UIX_testing_a ON testing(colb);
INSERT INTO testing VALUES (1, 'b');
INSERT INTO testing VALUES (2, 'a');
SELECT * FROM testing;
UPDATE testing
SET colb = CASE cola WHEN 1 THEN 'a'
WHEN 2 THEN 'b'
END
WHERE cola IN (1,2);
SELECT * FROM testing;
so you will go from:
cola colb
------------
1 b
2 a
to:
cola colb
------------
1 a
2 b
I also think that #2 is the best bet, though I would be sure to wrap it in a transaction in case something goes wrong mid-update.
An alternative (since you asked) to updating the Unique Index values with different values would be to update all of the other values in the rows to that of the other row. Doing this means that you could leave the Unique Index values alone, and in the end, you end up with the data that you want. Be careful though, in case some other table references this table in a Foreign Key relationship, that all of the relationships in the DB remain intact.
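A sketch of that alternative, reusing the testing table from the answer above: leave the uniquely indexed colb alone and swap the remaining column(s) instead (here only cola), which produces the same final pairing without touching the unique index:
UPDATE testing
SET cola = CASE cola WHEN 1 THEN 2 WHEN 2 THEN 1 END
WHERE cola IN (1, 2);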
I have the same problem. Here's my proposed approach in PostgreSQL. In my case, my unique index is a sequence value, defining an explicit user-order on my rows. The user will shuffle rows around in a web-app, then submit the changes.
I'm planning to add a "before" trigger. In that trigger, whenever my unique index value is updated, I will look to see if any other row already holds my new value. If so, I will give them my old value, and effectively steal the value off them.
I'm hoping that PostgreSQL will allow me to do this shuffle in the before trigger.
I'll post back and let you know my mileage.
In SQL Server, the MERGE statement can update rows that would normally break a UNIQUE KEY/INDEX. (Just tested this because I was curious.)
However, you'd have to use a temp table/variable to supply MERGE w/ the necessary rows.
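A sketch of that idea, reusing the testing table from the earlier SQL Server answer (the table variable @swap is hypothetical and supplies the target values):
DECLARE @swap TABLE (cola int, colb CHAR(1));
INSERT INTO @swap VALUES (1, 'a'), (2, 'b');
-- one MERGE statement updates both rows at once, so the unique
-- index on colb is only checked after both rows have changed
MERGE testing AS t
USING @swap AS s
ON t.cola = s.cola
WHEN MATCHED THEN
    UPDATE SET t.colb = s.colb;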
For Oracle there is an option for this, but you have to declare the constraint DEFERRABLE; then you can defer the check:
SET CONSTRAINT emp_no_fk_par DEFERRED;
To defer ALL constraints that are deferrable during the entire session, you can use the ALTER SESSION SET constraints=DEFERRED statement.
I usually think of a value that absolutely no row in my table could have. Usually - for unique column values - it's really easy. For example, for values of the column 'position' (information about the order of several elements) it's 0.
Then you can copy value A to a variable, update it with value B, and then set value B from your variable. I know of no better solution, though; a sketch follows.
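A sketch of that idea, assuming a hypothetical table t(id, position) with a unique index on position, where rows 1 and 2 currently hold positions 10 and 20:
UPDATE t SET position = 0  WHERE id = 1;  -- 0 is the value no row ever holds
UPDATE t SET position = 10 WHERE id = 2;
UPDATE t SET position = 20 WHERE id = 1;
Run all three statements in one transaction so a failure cannot leave the placeholder value behind.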
Oracle has deferred integrity checking which solves exactly this, but it is not available in either SQL Server or MySQL.
Switch the ids for the names, swapping adjacent rows:
id  student
 1  Abbot
 2  Doris
 3  Emerson
 4  Green
 5  Jeames
For the sample input, the output is:
id  student
 1  Doris
 2  Abbot
 3  Green
 4  Emerson
 5  Jeames
A comment asked: "in case of n rows, how will this be managed?"