Comparing UUIDs in Postgres

Let's say I create two tables in Postgres with a UUID as the PRIMARY KEY. These UUIDs are generated using the uuid-ossp module in Postgres: https://www.postgresql.org/docs/9.5/static/uuid-ossp.html
CREATE TABLE foo (
idFoo UUID PRIMARY KEY DEFAULT gen_random_uuid(),
foo TEXT
);
CREATE TABLE bar (
idBar UUID,
bar TEXT,
FOREIGN KEY (idBar) REFERENCES foo(idFoo)
);
I then want to create a VIEW based on the above two tables:
CREATE OR REPLACE VIEW foobar AS (
SELECT f.idFoo, b.idBar
FROM foo f, bar b
WHERE f.idFoo = b.idBar
-- AND some other condition --
);
Question: How do I compare the UUID types?

Don't compare the UUIDs in the WHERE clause; what you want is to JOIN on them. If the key column had the same name in both tables (say id), you could join with USING:
CREATE OR REPLACE VIEW foobar AS (
SELECT f.foo, b.bar, f.id
FROM foo f JOIN bar b USING (id)
WHERE -- some other condition --
);
Since your columns are named differently (idFoo and idBar), JOIN on them with an ON clause:
CREATE OR REPLACE VIEW foobar AS (
SELECT f.foo, b.bar, f.idFoo, b.idBar
FROM foo f JOIN bar b ON (f.idFoo = b.idBar)
WHERE -- some other condition --
);
(Of course, because idFoo = idBar, there's no need to include both in your second select).
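For completeness: UUID values compare directly with the usual operators, so no cast is needed when both sides are of type uuid. A minimal sketch (the literal below is made up purely for illustration):
SELECT f.foo
FROM foo f
WHERE f.idFoo = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;  -- uuid = uuid comparison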

Related

SCOPE for a table of REFs

I am designing an object-relational model with Oracle (18.4.0) and I would like to add a SCOPE constraint to a table type column of an object table. Is it possible? Here is a simplified model:
CREATE OR REPLACE TYPE t_cycler AS OBJECT (
name VARCHAR2(50)
);
CREATE TABLE cycler OF t_cycler (
name PRIMARY KEY
);
CREATE OR REPLACE TYPE t_cycler_list IS TABLE OF REF t_cycler;
CREATE OR REPLACE TYPE t_team AS OBJECT (
name VARCHAR2(50),
cyclers t_cycler_list
);
CREATE TABLE team OF t_team (
name PRIMARY KEY
)
NESTED TABLE cyclers STORE AS cyclers_tab;
I need team.cyclers to contain only REFs to objects in cycler. I looked into the documentation, but unfortunately it does not say much about the SCOPE constraint; for example:
You can constrain a column type, collection element, or object type
attribute to reference a specified object table. Use the SQL
constraint subclause SCOPE IS when you declare the REF.
But the only example it provides is for a simple column type. I tried specifying SCOPE IS cycler in several ways inside the CREATE statement of the team table, but with no results.
You want to add the scope to the COLUMN_VALUE pseudo-column of the nested table:
ALTER TABLE cyclers_tab ADD SCOPE FOR ( COLUMN_VALUE ) IS cycler;
If you then do:
INSERT INTO cycler ( name ) VALUES ( 'c1.1' );
INSERT INTO cycler ( name ) VALUES ( 'c1.2' );
INSERT INTO team (
name,
cyclers
) VALUES (
'team1',
t_cycler_list(
( SELECT REF(c) FROM cycler c WHERE name = 'c1.1' ),
( SELECT REF(c) FROM cycler c WHERE name = 'c1.2' )
)
);
Then you can insert the row. But, if you have another table of the same object type:
CREATE TABLE cycler2 OF t_cycler (
name PRIMARY KEY
);
INSERT INTO cycler2 ( name ) VALUES ( 'c2.1' );
And try to do:
INSERT INTO team (
name,
cyclers
) VALUES (
'team2',
t_cycler_list(
( SELECT REF(c) FROM cycler2 c WHERE name = 'c2.1' )
)
);
Then you get the error:
ORA-22889: REF value does not point to scoped table
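If you want to confirm that the scope was recorded, the data dictionary should reflect it. A sketch, assuming access to the standard USER_REFS view (column names per the Oracle data dictionary; verify against your version):
-- should list COLUMN_VALUE of CYCLERS_TAB with IS_SCOPED = 'YES' and SCOPE_TABLE_NAME = 'CYCLER'
SELECT table_name, column_name, is_scoped, scope_table_name
FROM user_refs
WHERE table_name = 'CYCLERS_TAB';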

How to make a table share a sequence with another table in postgresql?

Basically I have table1 with unique IDs of features for a city; now I have table2 with features for the whole country.
I need to create new IDs for the country table (these need to share the same sequence as the city table, so that the IDs match when cross-referencing the tables).
How do I make table2 have the same IDs as table1 within that city, and new IDs for features elsewhere? Essentially, sharing the sequence.
Edit: the tables are already created; how can I update table2?
If you manually create a sequence and assign it as the default value of the ID columns, this works. But to reuse an existing value, we would need a trigger that either assigns an existing value or obtains a new one from the shared sequence.
create sequence baz;
create table foo(id bigint default nextval('baz'), value text);
create table bar(id bigint default nextval('baz'), value date);
insert into foo (value) values ('Hello');
insert into bar (value) values (now());
insert into foo (value) values ('World');
insert into bar (value) values (now());
select 'foo', id, value::text from foo
union all
select 'bar', id, value::text from bar
And the result is:
foo 1 Hello
bar 2 2018-10-29
foo 3 World
bar 4 2018-10-29
And as a bonus:
drop sequence baz
ERROR: cannot drop sequence baz because other objects depend on it
Detail:
default for table foo column id depends on sequence baz
default for table bar column id depends on sequence baz
Hint: Use DROP ... CASCADE to drop the dependent objects too.
Edit: if post-processing is possible, then this approach could be used to assign values for the missing ID columns:
update bar
SET id = coalesce((select id from foo where bar.city_name = foo.city_name),nextval('baz'))
WHERE id is null
If your tables are already created, you must first create a sequence:
create sequence seq_city_country;
and then attach the sequence to your ID columns with the following code:
ALTER TABLE city ALTER COLUMN id_city SET DEFAULT nextval('seq_city_country');
ALTER TABLE country ALTER COLUMN id_country SET DEFAULT nextval('seq_city_country');
If a sequence (say sequence_c) was already created for the city table, you can reuse it:
ALTER TABLE country ALTER COLUMN id_country SET DEFAULT nextval('sequence_c');
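One more caveat, as a sketch: if the tables already contain rows, a freshly created shared sequence starts at 1 and can hand out IDs that already exist, so you will probably also want to bump it past the current maximum (column names taken from the ALTER statements above; adjust to your schema):
SELECT setval('seq_city_country',
              GREATEST((SELECT COALESCE(MAX(id_city), 0) FROM city),
                       (SELECT COALESCE(MAX(id_country), 0) FROM country),
                       1));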
CREATE SEQUENCE shared_seq;
CREATE TABLE a (
col1 bigint DEFAULT nextval('shared_seq'),
...
);
CREATE TABLE b (
col1 bigint DEFAULT nextval('shared_seq'),
...
);
This doesn't sound like very good (or even possible) database design. Instead, I suggest creating a junction table which relates cities to their respective countries. So, your three tables might look like this:
city (PK id, name, ...)
country (PK id, name, ...)
country_city (city_id, country_id) PK (city_id -> city(id), country_id -> country(id))
With this design, you don't need to worry about the auto increment sequences in the city and country table. Just let Postgres assign those values, and then just maintain the junction table using the correct values.
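A minimal DDL sketch of that design (table and column names are illustrative, not taken from the question):
CREATE TABLE city (
    id   bigserial PRIMARY KEY,
    name text NOT NULL
);
CREATE TABLE country (
    id   bigserial PRIMARY KEY,
    name text NOT NULL
);
-- junction table: which cities belong to which country
CREATE TABLE country_city (
    city_id    bigint NOT NULL REFERENCES city(id),
    country_id bigint NOT NULL REFERENCES country(id),
    PRIMARY KEY (city_id, country_id)
);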

postgres fast check if attribute combination also exists in another table

I want to check whether the same two attribute values exist in two different tables. If a combination from table_a does not exist in table_b, it should show up in the result of the SELECT statement. Right now I have the following query, which works:
CREATE TABLE table_a (
attr_a integer,
attr_b text,
uuid character varying(200),
CONSTRAINT table_a_pkey PRIMARY KEY (uuid)
);
CREATE TABLE table_b (
attr_a integer,
attr_b text,
uuid character varying(200),
CONSTRAINT table_b_pkey PRIMARY KEY (uuid)
);
SELECT * FROM table_a
WHERE (table_a.attr_a::text || table_a.attr_b::text) != ALL(SELECT (table_b.attr_a::text || table_b.attr_b::text) FROM table_b)
However, the execution time is pretty long. So I would like to ask if there is a faster solution to check for that.
Your WHERE clause uses a manipulation of attr_a (casting it to text and concatenating it with attr_b), so an index can't be used. Instead of this concatenation, why not try a straightforward NOT EXISTS?
SELECT *
FROM   table_a a
WHERE  NOT EXISTS (SELECT *
                   FROM   table_b b
                   WHERE  a.attr_a = b.attr_a AND
                          a.attr_b = b.attr_b)
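If it is still slow, a composite index on table_b gives the NOT EXISTS subquery an index to probe. A sketch (the index name is arbitrary; verify the benefit with EXPLAIN ANALYZE on your data):
CREATE INDEX table_b_attr_a_attr_b_idx ON table_b (attr_a, attr_b);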

Define foreign key in Postgres to a subset of a target table

Example:
I have:
Table A:
int id
int table_b_id
Table B:
int id
text type
I want to add a constraint check on column table_b_id that verifies it points only to rows in table B whose type value is 'X'.
I can't change the table structure.
I understand it can be done with a CHECK constraint and a Postgres function that runs the specific query, but I've seen people recommend avoiding that.
Any input on the best approach to implement this would be helpful.
What you are referring to is not a FOREIGN KEY, which, in PostgreSQL, refers to one or more columns in another table covered by a unique index, and which may have associated automatic actions when the value of those columns changes (ON UPDATE, ON DELETE).
You are trying to enforce a specific kind of referential integrity, similar to what a FOREIGN KEY does. You can do this with a CHECK clause and a function (because the CHECK clause itself does not allow sub-queries), or with table inheritance and range partitioning (referencing a child table which holds only the rows where type = 'X'), but it is probably easiest to do this with a trigger:
CREATE FUNCTION trf_test_type_x() RETURNS trigger AS $$
BEGIN
    -- look for a matching row in tableB that has the required type
    PERFORM * FROM tableB WHERE id = NEW.table_b_id AND type = 'X';
    IF NOT FOUND THEN
        -- RAISE NOTICE 'Foreign key violation...';
        RETURN NULL;   -- silently skip the offending INSERT/UPDATE
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER tr_test_type_x
    BEFORE INSERT OR UPDATE ON tableA
    FOR EACH ROW EXECUTE PROCEDURE trf_test_type_x();
You can create a partial index on tableB to speed things up:
CREATE UNIQUE INDEX idx_type_X ON tableB(id) WHERE type = 'X';
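A quick usage sketch, assuming minimal tableA/tableB definitions matching the question (created before the trigger above). Note that RETURN NULL makes the trigger skip the offending row silently rather than raise an error:
-- hypothetical minimal tables matching the question's description
CREATE TABLE tableB (id int PRIMARY KEY, type text);
CREATE TABLE tableA (id int PRIMARY KEY, table_b_id int);
-- ... create the trigger function, trigger and partial index shown above ...
INSERT INTO tableB VALUES (1, 'X'), (2, 'Y');
INSERT INTO tableA VALUES (10, 1);  -- kept: row 1 has type 'X'
INSERT INTO tableA VALUES (11, 2);  -- silently skipped by the trigger (row 2 is not type 'X')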
The most elegant solution, in my opinion, is to use inheritance to get subtyping behavior:
PostgreSQL 9.3 Schema Setup with inheritance:
create table B ( id int primary key );
-- Instead of creating a 'type' field, inherit from B for
-- each type with custom properties:
create table B_X ( -- some_data varchar(10),
constraint pk primary key (id)
) inherits (B);
-- Sample data:
insert into B_X (id) values ( 1 );
insert into B (id) values ( 2 );
-- Now, instead of referencing B, you should reference B_X:
create table A ( id int primary key, B_id int references B_X(id) );
-- Here it is:
insert into A values ( 1, 1 );
-- Inserting wrong values will cause a violation:
insert into A values ( 2, 2 );
ERROR: insert or update on table "a" violates foreign key constraint "a_b_id_fkey"
Detail: Key (b_id)=(2) is not present in table "b_x".
Retrieving all data from base table:
select * from B
Results:
| id |
|----|
| 2 |
| 1 |
Retrieving data with type:
SELECT p.relname, c.*
FROM B c inner join pg_class p on c.tableoid = p.oid
Results:
| relname | id |
|---------|----|
| b | 2 |
| b_x | 1 |

Moving table columns to new table and referencing as foreign key in PostgreSQL

Suppose we have a DB table with fields
"id", "category", "subcategory", "brand", "name", "description", etc.
What's a good way of creating separate tables for
category, subcategory and brand
and the corresponding columns and rows in the original table becoming foreign key references?
To outline the operations involved:
get all unique values in each column of the original table which should become foreign keys;
create tables for those
create foreign key reference columns in the original table (or a copy)
In this case, the PostgreSQL DB is accessed via Sequel in a Ruby app, so available interfaces are the command line, Sequel, PGAdmin, etc...
The question: how would you do this?
-- Some test data
CREATE TABLE animals
( id SERIAL NOT NULL PRIMARY KEY
, name varchar
, category varchar
, subcategory varchar
);
INSERT INTO animals(name, category, subcategory) VALUES
( 'Chimpanzee' , 'mammals', 'apes' )
,( 'Urang Utang' , 'mammals', 'apes' )
,( 'Homo Sapiens' , 'mammals', 'apes' )
,( 'Mouse' , 'mammals', 'rodents' )
,( 'Rat' , 'mammals', 'rodents' )
;
-- [empty] table to contain the "squeezed out" domain
CREATE TABLE categories
( id SERIAL NOT NULL PRIMARY KEY
, category varchar
, subcategory varchar
, UNIQUE (category,subcategory)
);
-- The original table needs a "link" to the new table
ALTER TABLE animals
ADD column category_id INTEGER -- NOT NULL
REFERENCES categories(id)
;
-- FK constraints are helped a lot by a supportive index.
CREATE INDEX animals_categories_fk ON animals (category_id);
-- Chained query to:
-- * populate the domain table
-- * initialize the FK column in the original table
WITH ins AS (
INSERT INTO categories(category, subcategory)
SELECT DISTINCT a.category, a.subcategory
FROM animals a
RETURNING *
)
UPDATE animals ani
SET category_id = ins.id
FROM ins
WHERE ins.category = ani.category
AND ins.subcategory = ani.subcategory
;
-- Now that we have the FK pointing to the new table,
-- we can drop the redundant columns.
ALTER TABLE animals DROP COLUMN category, DROP COLUMN subcategory;
-- show it to the world
SELECT a.*
, c.category, c.subcategory
FROM animals a
JOIN categories c ON c.id = a.category_id
;
Note: the fragment:
WHERE ins.category = ani.category
AND ins.subcategory = ani.subcategory
will lead to problems if these columns contain NULLs.
It would be better to compare them using
(ins.category,ins.subcategory)
IS NOT DISTINCT FROM
(ani.category,ani.subcategory)
I'm not sure I completely understand your question; if this doesn't seem to answer it, please leave a comment and possibly improve your question to clarify. But it sounds like you want to do a CREATE TABLE ... AS. For example:
CREATE TABLE category AS (SELECT DISTINCT category AS id FROM parent_table);
Then alter the parent_table to add a foreign key constraint.
ALTER TABLE parent_table ADD CONSTRAINT category_fk FOREIGN KEY (category) REFERENCES category (id);
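One caveat: a foreign key can only reference columns that carry a PRIMARY KEY or UNIQUE constraint, and CREATE TABLE ... AS does not create one, so you would likely need something like this first (a sketch):
ALTER TABLE category ADD PRIMARY KEY (id);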
Repeat this for each table you want to create.
Here is the related documentation:
CREATE TABLE
ALTER TABLE
Note: code and references are for PostgreSQL 9.4.