One of my clients insists that I create a unique identifier that starts with a given prefix, then increments by one in a postgres table. For example, PREFIX000001.
I know postgres provides SERIAL, BIGSERIAL and UUID to uniquely identify rows in a table. But the client just does not listen. He wants it his way.
Here is the sample table (Excel representation) - something like unique_id column that auto generates on every INSERT command:
I really want to know whether this is technically possible in postgres.
How should I go about this?
You could create a SERIAL or BIGSERIAL like you suggested but represent it with a string when reporting the data in the application (if the client would accept that):
SELECT to_char(id, '"PREFIX0"FM0000000') AS unique_id, product_name, product_desc FROM my_table;
For example:
SELECT to_char(123, '"PREFIX0"FM0000000') AS unique_id;
unique_id
----------------
PREFIX00000123
(1 row)
Time: 2.704 ms
Otherwise you would have to do this:
CREATE SEQUENCE my_prefixed_seq;
CREATE TABLE my_table (
unique_id TEXT NOT NULL DEFAULT 'PREFIX'||to_char(nextval('my_prefixed_seq'::regclass), 'FM0000000'),
product_name text,
product_desc text
);
INSERT INTO my_table (product_name) VALUES ('Product 1');
INSERT INTO my_table (product_name) VALUES ('Product 2');
INSERT INTO my_table (product_name) VALUES ('Product 3');
SELECT * FROM my_table;
   unique_id   | product_name | product_desc
---------------+--------------+--------------
 PREFIX0000004 | Product 1    |
 PREFIX0000005 | Product 2    |
 PREFIX0000006 | Product 3    |
(3 rows)
Time: 3.595 ms
I would advise you to try to make the client reconsider, but it looks like you already tried that route.
To whoever reads this in the future: please don't do this to your database. It is not good practice, as @Beki acknowledged in his question.
As Gab says, that's a pretty cumbersome thing to do. If you also want to keep a normal primary key for internal use in your app, here's a solution:
CREATE OR REPLACE FUNCTION add_prefix(INTEGER) RETURNS text AS
$$ select 'PREFIX'||to_char($1, 'FM0000000'); $$
LANGUAGE sql immutable;
CREATE TABLE my_table (
id SERIAL PRIMARY KEY,
unique_id TEXT UNIQUE NOT NULL GENERATED ALWAYS AS
(add_prefix(id)) STORED,
product_name text
);
INSERT INTO my_table (product_name) VALUES ('Product 1');
INSERT INTO my_table (product_name) VALUES ('Product 2');
INSERT INTO my_table (product_name) VALUES ('Product 3');
select * from my_table;
id | unique_id | product_name
----+---------------+--------------
1 | PREFIX0000001 | Product 1
2 | PREFIX0000002 | Product 2
3 | PREFIX0000003 | Product 3
Sure, you get an extra index gobbling up RAM and disk space for nothing. But, when the client then inevitably asks you "I want to update the unique identifier" in a few months...
Or even worse, "why are there holes in the sequence, can't you make it so there are no holes"...
...then you won't have to update ALL the relations in all the tables...
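To make that payoff concrete, any other table can reference the stable internal id instead of the client-facing string. A minimal sketch (the child table and its column names are illustrative, not from the question):

```sql
-- Hypothetical child table: it references the internal surrogate id,
-- so a later change to the unique_id format never cascades here.
CREATE TABLE my_table_detail (
    detail_id   serial PRIMARY KEY,
    my_table_id integer NOT NULL REFERENCES my_table (id),
    note        text
);
```

If the client later demands a new prefix or width, only the formatting of unique_id in my_table has to change; every foreign key stays untouched.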
One method uses a sequence:
create sequence t_seq;
create table t (
unique_id varchar(255) default ('PREFIX' || lpad(nextval('t_seq')::text, 6, '0'))
);
Here is a db<>fiddle.
Postgres database
I'm trying to find a faster way to create a new column in a table that is a copy of the table's primary key column. For example, I have the following columns in a table named students:
student_id Integer Auto-Increment -- Primary key
name varchar
Then I would like to create a new column named old_student_id which has all the same values as student_id.
To do this I create the column and then execute the following update statement:
update students set old_student_id = student_id;
This works, but on my biggest table it takes over an hour, and I feel like some alternative approach should get that down to a few minutes; I just don't know what.
So what I want at the end of the day is something that looks like this:
+------------+-----+---------------+
| student_id | name| old_student_id|
+------------+-----+---------------+
| 1 | bob | 1 |
+------------+-----+---------------+
| 2 | tod | 2 |
+------------+-----+---------------+
| 3 | joe | 3 |
+------------+-----+---------------+
| 4 | tim | 4 |
+------------+-----+---------------+
To speed things up a bit before I do the update query, I drop all the FKs and indexes on the table, then reapply them when it finishes. Also, I'm on AWS RDS, so I have set up a parameter group with synchronous_commit = off, turned off backups, and increased work_mem a bit for the duration of this update.
For context, this is actually happening to every table in the database, across three databases. The old ids are used as references by several external systems, so I need to keep track of them in order to update those systems as well. I have an 8-hour downtime window; currently merging the databases takes ~3 hours, and a whole hour of that is spent creating these ids.
If you will not need to update the old_student_id column in the future, you can use generated columns in PostgreSQL (version 12 and later):
CREATE TABLE table2 (
id serial4 NOT NULL,
val1 int4 NULL,
val2 int4 NULL,
total int4 NULL GENERATED ALWAYS AS (id) STORED
);
During the insert, the total field will be set to the same value as the id field. But you cannot update this field afterwards, because it is a generated column.
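If you do later need to edit such a column, newer PostgreSQL versions (13+) can detach the expression so the column becomes an ordinary, updatable column. A hedged sketch against the table above:

```sql
-- After this, total keeps its current values but is no longer generated,
-- so it can be updated like any other column (PostgreSQL 13+ only).
ALTER TABLE table2 ALTER COLUMN total DROP EXPRESSION;
```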
An alternative method is to use triggers; in this case you can update the fields later. See this example.
First we need to create a trigger function that will be called before each insert:
CREATE OR REPLACE FUNCTION table2_insert()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
BEGIN
    new.total := new.val1 * new.val2;
    RETURN new;
END;
$function$;
After that:
CREATE TABLE table2 (
id serial4 NOT NULL,
val1 int4 NULL,
val2 int4 NULL,
total int4 NULL
);
create trigger my_trigger
before insert on table2
for each row execute function table2_insert();
With both methods, you don't have to update many records every time.
I am trying to create a table with an auto-increment column as below. Since Redshift psql doesn't support SERIAL, I had to use IDENTITY data type:
IDENTITY(seed, step)
Clause that specifies that the column is an IDENTITY column. An IDENTITY column contains unique auto-generated values. These values start with the value specified as seed and increment by the number specified as step. The data type for an IDENTITY column must be either INT or BIGINT.
My create table statement looks like this:
CREATE TABLE my_table(
id INT IDENTITY(1,1),
name CHARACTER VARYING(255) NOT NULL,
PRIMARY KEY( id )
);
However, when I tried to insert data into my_table, the ids increment only by even numbers, like below:
id | name |
----+------+
2 | anna |
4 | tom |
6 | adam |
8 | bob |
10 | rob |
My insert statement looks like below:
INSERT INTO my_table ( name )
VALUES ( 'anna' ), ('tom') , ('adam') , ('bob') , ('rob' );
I am also having trouble with bringing the id column back to start with 1. There are solutions for SERIAL data type, but I haven't seen any documentation for IDENTITY.
Any suggestions would be much appreciated!
You have to set your identity as follows:
id INT IDENTITY(0,1)
Source: http://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_TABLE_examples.html
And you can't reset the id to 0. You will have to drop the table and create it back again.
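One hedged way to do that drop-and-recreate while keeping the existing rows is a deep copy (table names are illustrative; Redshift will assign fresh identity values on the reload):

```sql
-- Recreate the table so the IDENTITY counter starts over,
-- then reload everything except the old id column.
CREATE TABLE my_table_new (
    id INT IDENTITY(1,1),
    name CHARACTER VARYING(255) NOT NULL,
    PRIMARY KEY (id)
);

INSERT INTO my_table_new (name)
SELECT name FROM my_table ORDER BY id;

DROP TABLE my_table;
ALTER TABLE my_table_new RENAME TO my_table;
```

Note the caveat from the answers below: whether the new ids come out gap-free still depends on how the load is parallelized.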
Set your seed value to 1 and your step value to 1.
Create table
CREATE table my_table(
id bigint identity(1, 1),
name varchar(100),
primary key(id));
Insert rows
INSERT INTO my_table ( name )
VALUES ('anna'), ('tom') , ('adam'), ('bob'), ('rob');
Results
id | name |
----+------+
1 | anna |
2 | tom |
3 | adam |
4 | bob |
5 | rob |
For some reason, if you set your seed value to 0 and your step value to 1, the integer will increase in steps of 2.
Create table
CREATE table my_table(
id bigint identity(0, 1),
name varchar(100),
primary key(id));
Insert rows
INSERT INTO my_table ( name )
VALUES ('anna'), ('tom') , ('adam'), ('bob'), ('rob');
Results
id | name |
----+------+
0 | anna |
2 | tom |
4 | adam |
6 | bob |
8 | rob |
This issue is discussed at length in AWS forum.
https://forums.aws.amazon.com/message.jspa?messageID=623201
The answer from AWS:
Short answer to your question is seed and step are only honored if you
disable both parallelism and the COMPUPDATE option in your COPY.
Parallelism is disabled if and only if you're loading your data from a
single file, which is what we normally do not recommend, and hence
will be an unlikely scenario for most users.
Parallelism impacts things because in order to ensure that there is no
single point of contention in assigning identity values to rows, there
end up being gaps in the value assignment. When parallelism is
disabled, the load is happening serially, and therefore, there is no
issue with assigning different id values in parallel.
The reason COMPUPDATE impacts things is when it's enabled, the COPY is
actually making 2 passes over your data. During the first pass, it
internally increments the identity values, and as a result, your
initial value starts with a larger value than you'd expect.
We'll update the doc to reflect this.
Multiple nodes also seem to cause this effect with an IDENTITY column. In essence, it can only guarantee that the IDs are unique, not that they are consecutive.
Colleges have different ways of organizing their departments. Some schools go School -> Term -> Department. Others have steps in between, with the longest being School -> Sub_Campus -> Program -> Term -> Division -> Department.
School, Term, and Department are the only ones that always exist in a school's "tree" of departments. The order of these categories never changes, with the second example I gave you being the longest. Every step down is a 1:N relationship.
Now, I'm not sure how to set up the relationships between the tables. For example, what columns are in Term? Its parent could be a Program, Sub_Campus, or School. Which one it is depends on the school's system. I could conceive of setting up the Term table to have foreign keys for all of those (which all would default to NULL), but I'm not sure this is the canonical way of doing things here.
I suggest you use a general table, called e.g. Entity, which would contain an id field and a self-referencing parent field.
Each relevant table would then contain a field pointing to Entity's id (1:1). In a way, each table would be a child of the Entity table.
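A minimal sketch of that idea (all names are illustrative, and only one of the category tables is shown):

```sql
-- One generic node table holds the whole hierarchy.
CREATE TABLE entity (
    id        serial PRIMARY KEY,
    parent_id integer REFERENCES entity (id)  -- NULL for the root
);

-- Each concrete category table is a 1:1 child of entity.
CREATE TABLE term (
    entity_id integer PRIMARY KEY REFERENCES entity (id),
    term_name varchar NOT NULL
);
```

Term no longer needs to know whether its parent is a School, Sub_Campus, or Program: the parent is just another entity row, and its concrete type lives in whichever category table points at it.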
Here's one design possibility:
This option takes advantage of your special constraints. Basically, you generalize every hierarchy to the longest form by introducing generic nodes. If a school doesn't have a "sub campus", just assign it a generic sub campus called "Main". For example, School -> Term -> Department can be treated the same as School -> Sub_Campus=Main -> Program=Main -> Term -> Division=Main -> Department: wherever the school lacks a node type, we insert a default node called "Main". You can then give these generic nodes a boolean flag marking them as placeholders, which lets you filter them out in the middle layer or the UX if needed.
This design will allow you to take advantage of all relational constraints as usual and simplify handling of missing node types in your code.
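A sketch of one level of that design (the school table and all names here are assumptions, not from the question):

```sql
-- Placeholder nodes are ordinary rows, just flagged so the UI can hide them.
CREATE TABLE sub_campus (
    sub_campus_id  serial PRIMARY KEY,
    school_id      integer NOT NULL REFERENCES school (school_id),
    name           varchar NOT NULL DEFAULT 'Main',
    is_placeholder boolean NOT NULL DEFAULT false
);
```

Inserting a school with no real sub-campuses then just means inserting one row with the defaults, and every query can keep joining through sub_campus unconditionally.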
-- Enforcing a taxonomy by self-referential (recursive) tables.
-- Both the classes and the instances have a recursive structure.
-- The taxonomy is enforced mostly based on constraints on the classes,
-- the instances only need to check that {their_class , parents_class}
-- form a valid pair.
--
DROP schema school CASCADE;
CREATE schema school;
CREATE TABLE school.category
( id INTEGER NOT NULL PRIMARY KEY
, category_name VARCHAR
);
INSERT INTO school.category(id, category_name) VALUES
( 1, 'School' )
, ( 2, 'Sub_campus' )
, ( 3, 'Program' )
, ( 4, 'Term' )
, ( 5, 'Division' )
, ( 6, 'Department' )
;
-- This table contains a list of all allowable {child->parent} pairs.
-- As a convention, the "roots" of the trees point to themselves.
-- (this also avoids a NULL FK)
CREATE TABLE school.category_valid_parent
( category_id INTEGER NOT NULL REFERENCES school.category (id)
, parent_category_id INTEGER NOT NULL REFERENCES school.category (id)
);
ALTER TABLE school.category_valid_parent
ADD PRIMARY KEY (category_id, parent_category_id)
;
INSERT INTO school.category_valid_parent(category_id, parent_category_id)
VALUES
( 1,1) -- school -> school
, (2,1) -- subcampus -> school
, (3,1) -- program -> school
, (3,2) -- program -> subcampus
, (4,1) -- term -> school
, (4,2) -- term -> subcampus
, (4,3) -- term -> program
, (5,4) -- division --> term
, (6,4) -- department --> term
, (6,5) -- department --> division
;
CREATE TABLE school.instance
( id INTEGER NOT NULL PRIMARY KEY
, category_id INTEGER NOT NULL REFERENCES school.category (id)
, parent_id INTEGER NOT NULL REFERENCES school.instance (id)
-- NOTE: parent_category_id is logically redundant
-- , but needed to maintain the constraint
-- (without referencing a third table)
, parent_category_id INTEGER NOT NULL REFERENCES school.category (id)
, instance_name VARCHAR
); -- Forbid illegal combinations of {parent_id, parent_category_id}
ALTER TABLE school.instance ADD CONSTRAINT valid_cat UNIQUE (id,category_id);
ALTER TABLE school.instance
ADD FOREIGN KEY (parent_id, parent_category_id)
REFERENCES school.instance(id, category_id);
;
-- Forbid illegal combinations of {category_id, parent_category_id}
ALTER TABLE school.instance
ADD FOREIGN KEY (category_id, parent_category_id)
REFERENCES school.category_valid_parent(category_id, parent_category_id);
;
INSERT INTO school.instance(id, category_id
, parent_id, parent_category_id
, instance_name) VALUES
-- Zulo
(1,1,1,1, 'University of Utrecht' )
, (2,2,1,1, 'Uithof' )
, (3,3,2,2, 'Life sciences' )
, (4,4,3,3, 'Bacherlor' )
, (5,5,4,4, 'Biology' )
, (6,6,5,5, 'Evolutionary Biology' )
, (7,6,5,5, 'Botany' )
-- Nulo
, (11,1,11,1, 'Hogeschool Utrecht' )
, (12,4,11,1, 'Journalistiek' )
, (13,6,12,4, 'Begrijpend Lezen' )
, (14,6,12,4, 'Typvaardigheid' )
;
-- try to insert an invalid instance
INSERT INTO school.instance(id, category_id
, parent_id, parent_category_id
, instance_name) VALUES
( 15, 6, 3,3, 'Procreation' );
WITH RECURSIVE re AS (
SELECT i0.parent_id AS pa_id
, i0.parent_category_id AS pa_cat
, i0.id AS my_id
, i0.category_id AS my_cat
FROM school.instance i0
WHERE i0.parent_id = i0.id
UNION
SELECT i1.parent_id AS pa_id
, i1.parent_category_id AS pa_cat
, i1.id AS my_id
, i1.category_id AS my_cat
FROM school.instance i1
, re
WHERE re.my_id = i1.parent_id
)
SELECT re.*
, ca.category_name
, ins.instance_name
FROM re
JOIN school.category ca ON (re.my_cat = ca.id)
JOIN school.instance ins ON (re.my_id = ins.id)
-- WHERE re.my_id = 14
;
The output:
INSERT 0 11
ERROR: insert or update on table "instance" violates foreign key constraint "instance_category_id_fkey1"
DETAIL: Key (category_id, parent_category_id)=(6, 3) is not present in table "category_valid_parent".
pa_id | pa_cat | my_id | my_cat | category_name | instance_name
-------+--------+-------+--------+---------------+-----------------------
1 | 1 | 1 | 1 | School | University of Utrecht
11 | 1 | 11 | 1 | School | Hogeschool Utrecht
1 | 1 | 2 | 2 | Sub_campus | Uithof
11 | 1 | 12 | 4 | Term | Journalistiek
2 | 2 | 3 | 3 | Program | Life sciences
12 | 4 | 13 | 6 | Department | Begrijpend Lezen
12 | 4 | 14 | 6 | Department | Typvaardigheid
3 | 3 | 4 | 4 | Term | Bacherlor
4 | 4 | 5 | 5 | Division | Biology
5 | 5 | 6 | 6 | Department | Evolutionary Biology
5 | 5 | 7 | 6 | Department | Botany
(11 rows)
BTW: I left out the attributes. I propose they could be hooked to the relevant categories by means of an EAV type of data model.
I'm going to start by discussing implementing a single hierarchical model (just 1:N relationships) relationally.
Let's use your example School -> Term -> Department.
Here's code that I generated using MySQL Workbench (I removed a few things to make it clearer):
-- -----------------------------------------------------
-- Table `mydb`.`school`
-- -----------------------------------------------------
-- each of these tables would have more attributes in a real implementation
-- using varchar(50)'s for PKs because I can -- :)
CREATE TABLE IF NOT EXISTS `mydb`.`school` (
`school_name` VARCHAR(50) NOT NULL ,
PRIMARY KEY (`school_name`)
);
-- -----------------------------------------------------
-- Table `mydb`.`term`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`term` (
`term_name` VARCHAR(50) NOT NULL ,
`school_name` VARCHAR(50) NOT NULL ,
PRIMARY KEY (`term_name`, `school_name`) ,
FOREIGN KEY (`school_name` )
REFERENCES `mydb`.`school` (`school_name` )
);
-- -----------------------------------------------------
-- Table `mydb`.`department`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `mydb`.`department` (
`dept_name` VARCHAR(50) NOT NULL ,
`term_name` VARCHAR(50) NOT NULL ,
`school_name` VARCHAR(50) NOT NULL ,
PRIMARY KEY (`dept_name`, `term_name`, `school_name`) ,
FOREIGN KEY (`term_name` , `school_name` )
REFERENCES `mydb`.`term` (`term_name` , `school_name` )
);
Here is the MySQLWorkbench version of the data model:
As you can see, school, at the top of the hierarchy, has only school_name as its key, whereas department has a three-part key including the keys of all of its parents.
Key points of this solution
uses natural keys -- but could be refactored to use surrogate keys (SO question -- along with UNIQUE constraints on multi-column foreign keys)
every level of nesting adds one column to the key
each table's PK is the entire PK of the table above it, plus an additional column specific to that table
Now for the second part of your question.
My interpretation of the question
There is a hierarchical data model. However, some applications require all of the tables, whereas others utilize only some of the tables, skipping the others. We want to be able to implement 1 single data model and use it for both of these cases.
You could use the solution given above, and, as ShitalShah mentioned, add a default value to any table which would not be used. Let's see some example data, using the model given above, where we only want to save School and Department information (no Terms):
+-------------+
| school_name |
+-------------+
| hogwarts |
| uCollege |
| uMatt |
+-------------+
3 rows in set (0.00 sec)
+-----------+-------------+
| term_name | school_name |
+-----------+-------------+
| default | hogwarts |
| default | uCollege |
| default | uMatt |
+-----------+-------------+
3 rows in set (0.00 sec)
+-------------------------------+-----------+-------------+
| dept_name | term_name | school_name |
+-------------------------------+-----------+-------------+
| defense against the dark arts | default | hogwarts |
| potions | default | hogwarts |
| basket-weaving | default | uCollege |
| history of magic | default | uMatt |
| science | default | uMatt |
+-------------------------------+-----------+-------------+
5 rows in set (0.00 sec)
Key points
there is a default value in term for every value in school -- this could be quite annoying if you had a table deep in the hierarchy that an application didn't need
since the table schema doesn't change, the same queries can be used
queries are easy to write and portable
SO seems to think default should be colored differently
There is another solution to storing trees in databases. Bill Karwin discusses it here, starting around slide 49, but I don't think this is the solution you want. Karwin's solution is for trees of any size, whereas your examples seem to be relatively static. Also, his solutions come with their own set of problems (but doesn't everything?).
I hope that helps with your question.
For the general problem of fitting hierarchical data in a relational database, the common solutions are adjacency lists (parent-child links like your example) and nested sets. As noted in the Wikipedia article, Oracle's Tropashko proposed an alternative nested interval solution, but it's still fairly obscure.
The best choice for your situation depends on how you will be querying the structure, and which DB you are using. Cherry picking the article:
Queries using nested sets can be expected to be faster than queries
using a stored procedure to traverse an adjacency list, and so are the
faster option for databases which lack native recursive query
constructs, such as MySQL
However:
Nested sets are very slow for inserts because it requires updating lft
and rgt for all records in the table after the insert. This can cause
a lot of database thrash as many rows are rewritten and indexes
rebuilt.
Again, depending on how your structure will be queried, you may choose a NoSQL style denormalized Department table, with nullable foreign keys to all possible parents, avoiding recursive queries altogether.
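A hedged sketch of that denormalized option (the parent tables are assumed to exist; only the FKs for the levels that always exist are NOT NULL):

```sql
-- Every possible parent gets its own nullable FK;
-- a school with no divisions simply leaves division_id NULL.
CREATE TABLE department (
    department_id serial PRIMARY KEY,
    school_id     integer NOT NULL REFERENCES school (school_id),
    sub_campus_id integer REFERENCES sub_campus (sub_campus_id),
    program_id    integer REFERENCES program (program_id),
    term_id       integer NOT NULL REFERENCES term (term_id),
    division_id   integer REFERENCES division (division_id)
);
```

Queries become plain joins with no recursion, at the cost of the schema hard-coding the maximum depth of the hierarchy.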
I would develop this in a very flexible manner, in what seems to me to be the simplest way as well:
There should only be one table; let's call it category_nodes:
-- possible content; alternatively this could be stored in another table to
-- create a 1:N category:content relationship
drop table if exists category_nodes;
create table category_nodes (
    category_node_id int(11) not null auto_increment,
    parent_id int(11) not null default 1,
    name varchar(256),
    primary key (category_node_id)
);
-- set the first 2 records:
insert into category_nodes (parent_id, name) values( -1, 'root' );
insert into category_nodes (parent_id, name) values( -1, 'uncategorized' );
So each record in the table has a unique id, a parent id, and a name.
Now, after the first 2 inserts: the row where category_node_id is 0 is the root node (the ancestor of all nodes, no matter how many degrees away). The second is just a little helper: an uncategorized node at category_node_id = 1, which is also the default value of parent_id when inserting into the table.
Now imagining the root categories are School, Term, and Dept you would:
insert into category_nodes ( parent_id, name ) values ( 0, 'School' );
insert into category_nodes ( parent_id, name ) values ( 0, 'Term' );
insert into category_nodes ( parent_id, name ) values ( 0, 'Dept' );
Then to get all the root categories:
select * from category_nodes where parent_id = 0;
Now imagining a more complex schema:
-- School -> Division -> Department
-- CatX -> CatY
insert into category_nodes ( parent_id, name ) values ( 0, 'School' );   -- imagine it gets pkey = 2
insert into category_nodes ( parent_id, name ) values ( 2, 'Division' ); -- imagine it gets pkey = 3
insert into category_nodes ( parent_id, name ) values ( 3, 'Dept' );
--
insert into category_nodes ( parent_id, name ) values ( 0, 'CatX' ); -- 5
insert into category_nodes ( parent_id, name ) values ( 5, 'CatY' );
Now to get all the subcategories of School for example:
select * from category_nodes where parent_id = 2;
-- or even
select * from category_nodes where parent_id in ( select category_node_id from category_nodes
where name = 'School'
);
And so on. Thanks to the default of 1 for parent_id, inserting into the 'uncategorized' category becomes simple:
<?php
$name = 'New cat name';
mysql_query( "insert into category_nodes ( name ) values ( '$name' )" );
Cheers