I would like to set up a table in PostgreSQL such that two columns together must be unique. Either column can contain repeated values on its own, so long as no two rows share both.
For instance:
CREATE TABLE someTable (
id int PRIMARY KEY AUTOINCREMENT,
col1 int NOT NULL,
col2 int NOT NULL
)
So, col1 and col2 can each repeat, but not as the same pair. This would be allowed (not including the id):
1 1
1 2
2 1
2 2
but not this:
1 1
1 2
1 1 -- would reject this insert for violating constraints
CREATE TABLE someTable (
id serial PRIMARY KEY,
col1 int NOT NULL,
col2 int NOT NULL,
UNIQUE (col1, col2)
)
AUTOINCREMENT is not PostgreSQL. You want an integer primary key GENERATED ALWAYS AS IDENTITY (or serial if you are on PG 9.x or lower; serial was soft-deprecated in PG 10).
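For example, a minimal sketch of the identity-column form (PostgreSQL 10 or later), keeping the composite unique constraint:
CREATE TABLE someTable (
id int GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
col1 int NOT NULL,
col2 int NOT NULL,
UNIQUE (col1, col2)
)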
If col1 and col2 together are unique and can't be null, they make a good primary key:
CREATE TABLE someTable (
col1 int NOT NULL,
col2 int NOT NULL,
PRIMARY KEY (col1, col2)
)
Create a unique constraint so that the two values together cannot be repeated:
ALTER TABLE someTable
ADD UNIQUE (col1, col2)
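If you prefer a predictable constraint name (handy when you later need to drop it), the same constraint can be added with an explicit name; the name below is only illustrative:
ALTER TABLE someTable
ADD CONSTRAINT sometable_col1_col2_key UNIQUE (col1, col2)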
If, like me, you landed here with:
a pre-existing table,
to which you need to add a new column, and
also need to add a new unique constraint on the new column as well as an old one, AND
be able to undo it all (i.e. write a down migration)
Here is what worked for me, utilizing one of the above answers and expanding it:
-- up
ALTER TABLE myoldtable ADD COLUMN newcolumn TEXT;
ALTER TABLE myoldtable ADD CONSTRAINT myoldtable_oldcolumn_newcolumn_key UNIQUE (oldcolumn, newcolumn);
-- down
ALTER TABLE myoldtable DROP CONSTRAINT myoldtable_oldcolumn_newcolumn_key;
ALTER TABLE myoldtable DROP COLUMN newcolumn;
Seems like a regular UNIQUE constraint :)
CREATE TABLE example (
a integer,
b integer,
c integer,
UNIQUE (a, c));
In DB2 on Linux v11.1 I have a table:
COL1  COL2  COLn... (50 more columns)
A     A
A     A
B     A
B     B
...   (3 million rows)
There can be multiple rows with the same values, like the first two rows in my sample (so obviously there is no primary key on the table).
Now I have to add a new column ID and set a unique sequential number for every row.
The result should be:
COL1  COL2  COLn... (50 more columns)  ID
A     A                                1
A     A                                2
B     A                                3
B     B                                4
...   (3 million rows)
How do I write an update statement to populate the ID column?
Regards
Here is one way to do it, using an identity column; it assumes there is no existing primary key or identity column.
alter table myschema.mytab add column id integer not null default 0 ;
alter table myschema.mytab alter column id drop default ;
alter table myschema.mytab alter column id set generated always as identity ;
-- assign a unique sequential value to every existing row
update myschema.mytab set id = default ;
-- optional, if you want the new ID column to be a surrogate primary key
alter table myschema.mytab add constraint pkey primary key(id) ;
-- reorg the table and refresh statistics after the column alterations
reorg table myschema.mytab ;
runstats on table myschema.mytab with distribution and detailed indexes all;
Try this:
alter table myschema.mytab add column id integer not null default 0 ;
UPDATE (SELECT ID, ROWNUMBER() OVER() RN FROM myschema.mytab) SET ID = RN;
-- Or even simpler:
-- UPDATE myschema.mytab SET ID = ROWNUMBER() OVER();
I need help with the insert statements for a plethora of tables in our DB.
New to SQL - just basic understanding
Summary:
Table1
Col1 Col2 Col3
1 value1 value1
2 value2 value2
3 value3 value3
Table2
Col1 Col2 Col3
4 value1 value1
5 value2 value2
6 value3 value3
Multiple tables use the same sequence of auto-generated primary keys when user creates a static data record from the GUI.
However, what I'm looking for is a script to upload static data from one environment to the other.
Example from one of the tables:
Insert into RULE (PK_RULE,NAME,RULEID,DESCRIPTION)
values
(4484319,'TESTRULE',14,'TEST RULE DESCRIPTION')
How do I design my insert statement so that it reads the last value from the PK column (4484319 here) and automatically inserts 4484320, without explicitly specifying it?
Note: Our DB has hundreds of thousands of records.
I think there's something similar to (SELECT MAX(ID) + 1 FROM MyTable) which could potentially solve my problem but I don't know how to use it.
Multiple tables use the same sequence of auto-generated primary keys when user creates a static data record from the GUI.
Generally, multiple tables sharing a single sequence of primary keys is a poor design choice. Primary keys only need to be unique per table. If they need to be unique globally there are better options such as UUID primary keys.
Instead, one gives each table their own independent sequence of primary keys. In MySQL it's id bigint auto_increment primary key. In Postgres you'd use bigserial. In Oracle 12c it's number generated as identity.
create table users (
id number generated by default on null as identity primary key,
name varchar2(255) not null
);
create table things (
id number generated by default on null as identity primary key,
description varchar2(255) not null
);
Then you insert into each, leaving off the id, or setting it null. The database will fill it in from each sequence.
insert into users (name) values ('Yarrow Hock'); -- id 1
insert into users (id, name) values (null, 'Reaneu Keeves'); -- id 2
insert into things (description) values ('Some thing'); -- id 1
insert into things (id, description) values (null, 'Shiny stuff'); -- id 2
If your schema is not set up with auto incrementing, sequenced primary keys, you can alter the schema to use them. Just be sure to set each sequence to the maximum ID + 1. This is by far the most sane option in the long run.
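For example, in Oracle 12c+ you could give the RULE table from the question its own sequence, starting just above the current maximum key, and make it the column's default (the sequence name and start value below are only illustrative):
create sequence rule_seq start with 4484320;
alter table rule modify pk_rule default rule_seq.nextval;
-- new rows can now omit the key entirely
insert into rule (name, ruleid, description)
values ('TESTRULE', 14, 'TEST RULE DESCRIPTION');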
If you really must draw from a single source for all primary keys, create a sequence and use that.
create sequence master_seq
start with ...
Then get the next key with nextval.
insert into rule (pk_rule, name, ruleid, description)
values (master_seq.nextval, 'TESTRULE', 14, 'TEST RULE DESCRIPTION')
Such a sequence goes up to 1,000,000,000,000,000,000,000,000,000 which should be plenty.
The INSERT and UPDATE statements in Oracle have a ...RETURNING...INTO... clause on them which can be used to return just-inserted values. When combined with a trigger-and-sequence generated primary key (Oracle 11 and earlier) or an identity column (Oracle 12 and up) this lets you get back the most-recently-inserted/updated value.
For example, let's say that you have a table TABLE1 defined as
CREATE TABLE TABLE1 (ID1 NUMBER
GENERATED ALWAYS AS IDENTITY
PRIMARY KEY,
COL2 NUMBER,
COL3 VARCHAR2(20));
You then define a function which inserts data into TABLE1 and returns the new ID value:
CREATE OR REPLACE FUNCTION INSERT_TABLE1(pCOL2 NUMBER, vCOL3 VARCHAR2)
RETURN NUMBER
AS
nID NUMBER;
BEGIN
INSERT INTO TABLE1(COL2, COL3) VALUES (pCOL2, vCOL3)
RETURNING ID1 INTO nID;
RETURN nID;
END INSERT_TABLE1;
which gives you an easy way to insert data into TABLE1 and get the new ID value back.
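For instance, a quick anonymous block to exercise the function and capture the generated key (the argument values are only illustrative):
DECLARE
nID NUMBER;
BEGIN
nID := INSERT_TABLE1(42, 'some text');
DBMS_OUTPUT.PUT_LINE('New ID1 = ' || nID);
END;
/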
I have a table with columns Form, Appraiser, date and Level
A form can never have the same appraiser twice, and a form can never have the same level twice.
I tried PRIMARY KEY (Form, Appraiser) and PRIMARY KEY (Form, Level), but it says the table has multiple primary keys.
If I put PRIMARY KEY (Form, Appraiser, Level), people can just insert the same form and appraiser twice with a different level, and that violates my rules.
Form  Appraiser  Level
1     A          1
1     B          2
1     C          3
2     A          1
2     B          2
2     C          3
I believe you could use :-
CREATE TABLE IF NOT EXISTS mytable (form INTEGER, appraiser TEXT, level INTEGER, UNIQUE(form,appraiser), UNIQUE(form,level));
e.g. Using the following
DROP TABLE IF EXISTS mytable;
CREATE TABLE IF NOT EXISTS mytable (form INTEGER, appraiser TEXT, level INTEGER, UNIQUE(form,appraiser), UNIQUE(form,level));
INSERT INTO mytable VALUES
(1,'A',1),(1,'B',2),(1,'C',3),
(2,'A',1),(2,'B',2),(2,'C',3)
;
INSERT OR IGNORE INTO mytable VALUES (1,'A',4);
INSERT OR IGNORE INTO mytable values (1,'Z',1);
The results are :-
INSERT INTO mytable VALUES
(1,'A',1),(1,'B',2),(1,'C',3),
(2,'A',1),(2,'B',2),(2,'C',3)
> Affected rows: 6
> Time: 0.083s
all added
but
INSERT OR IGNORE INTO mytable VALUES (1,'A',4)
> Affected rows: 0
> Time: 0s
not added as A has already appraised form 1.
and also
INSERT OR IGNORE INTO mytable values (1,'Z',1)
> Affected rows: 0
> Time: 0s
not added as form 1 has already been appraised at level 1.
We can try using two junction tables here:
CREATE TABLE form_appraiser (
form_id INTEGER NOT NULL,
appraiser_id INTEGER NOT NULL,
PRIMARY KEY (form_id, appraiser_id)
);
CREATE TABLE form_level (
form_id INTEGER NOT NULL,
level_id INTEGER NOT NULL,
PRIMARY KEY (form_id, level_id)
);
Each of these two tables ensures that a given form can be associated with a given appraiser, or a given level, at most once.
Then, maintain a third table forms containing one record for each unique form. If you have the additional requirement that a given form can only have one appraiser or one level, add a unique constraint on form_id in one or both of the junction tables.
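A minimal sketch of that parent table (the name and extra column are illustrative); the form_id columns in the junction tables above would then carry a REFERENCES forms (form_id) clause:
CREATE TABLE forms (
form_id INTEGER PRIMARY KEY,
form_name TEXT
);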
You can try adding the unique constraint to individual columns, like
appraiser VARCHAR(50) UNIQUE,
form VARCHAR(50) UNIQUE,
level VARCHAR(50) UNIQUE,
In this case none of the values can repeat. If you want a combination of values not to repeat, you can use
UNIQUE(form, level)
This means the same form cannot appear twice with the same level.
I have a database table like so :
col1 PRI
col2 PRI
col3 PRI
col4 PRI
col5 PRI
col6
col7
col8
So it looks like the combination of columns 1 to 5 needs to be unique, and it makes 'sense' to just make those columns the primary key. Is this the right design, or should we instead add a new auto-generated column with a unique constraint on the 5 columns? We will query with either a subset of those columns (col1 - col3) or all 5 columns.
This is fine; I see no need to have a 'generated' column:
PRIMARY KEY(a,b,c,d,e)
If you have this, it will work efficiently:
WHERE b=22 AND c=333 AND a=4444 -- in any order
Most other combinations will be less efficient.
(Please use real column names so we can discuss things in more detail.)
If you set the columns individually as UNIQUE, inserts will fail because col1 cannot have the same value on two different rows.
But if you set the columns as a composite PRIMARY KEY without individual UNIQUE constraints, the database requires the combination of all the primary key columns to be unique, so the same col1+col2+col3+col4+col5 combination cannot appear on any other row.
Hope it helps.
EDIT
Here is an example:
create table example (
col1 bigint not null unique,
col2 bigint not null,
primary key (col1,col2));
insert into example values(1,1); ==> Success
insert into example values(1,2); ==> Failure - col1 is unique and '1' was used
insert into example values(2,1); ==> Success - '2' was never used on col1
insert into example values(2,7); ==> Failure - '2' was already used on col1
But if you use instead:
create table example (
col1 bigint not null,
col2 bigint not null,
primary key (col1,col2));
insert into example values(1,1); ==> Success
insert into example values(1,2); ==> Success
insert into example values(2,1); ==> Success
insert into example values(1,2); ==> Failure - '1','2' combination was used
I have the following two tables:
CREATE TABLE test1
(
ID int IDENTITY UNIQUE,
length int not null
)
CREATE TABLE test2
(
ID int IDENTITY UNIQUE,
test1number int references test1(ID),
distance int not null
)
Example: let's insert into test1 the values 1 and 100 (ID=1 and length=100). Now let's insert into test2 the values ID=1 and test1number=1 as a reference to test1. I want to create a constraint which will not allow writing a distance bigger than 100 (the length from test1).
Is there any way to do this other than a procedure?
If this is for individual rows, and we don't need to assert some property about all rows with the same test1number values, then one way to do it is this:
CREATE TABLE test1
(
ID int IDENTITY UNIQUE,
length int not null,
constraint UQ_test1_Length_XRef UNIQUE (ID,Length)
)
go
CREATE TABLE _test2
(
ID int IDENTITY UNIQUE,
test1number int references test1(ID),
_test1length int not null,
distance int not null,
constraint FK_test2_test1_length_xref foreign key (test1number,_test1length)
references test1 (ID,length) on update cascade,
constraint CK_length_distance CHECK (distance <= _test1length)
)
go
create view test2
as
select ID,test1number,distance from _test2
go
create trigger T_I_t2 on test2
instead of insert
as
insert into _test2(test1number,_test1length,distance)
select test1number,length,distance
from inserted i inner join test1 t on i.test1number = t.id
go
We only need the view and trigger if you're trying to hide the existence of this extra column in the _test2 table from your users.
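As a quick sanity check, assuming the objects above and the numbers from the question (the values are illustrative; note that SQL Server requires a value for the view's ID column when an INSTEAD OF INSERT trigger is used, even though the trigger ignores it):
insert into test1 (length) values (100); -- ID = 1, length = 100
insert into test2 (ID, test1number, distance) values (0, 1, 50); -- accepted: 50 <= 100
insert into test2 (ID, test1number, distance) values (0, 1, 150); -- rejected by CK_length_distance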