Updating foreign keys while inserting into new table - sql

I have table A(id).
I need to:
1) create table B(id)
2) add a foreign key column b_id to table A that references B.id
3) for every row in A, insert a row in B and update A.b_id with the id of the newly inserted row in B
Is it possible to do this without adding a temporary column in B that refers back to A? The following does work, but I'd rather not have to create a temporary column.
alter table B add column ref_id integer references A(id);
insert into B (ref_id) select id from A;
update A set b_id = B.id from B where B.ref_id = A.id;
alter table B drop column ref_id;

Assuming that:
1) you're using PostgreSQL 9.1 or later (data-modifying CTEs require 9.1)
2) B.id is a serial (so actually an int with a default value of nextval('b_id_seq'))
3) when inserting into B, you actually copy other fields over from A, otherwise the insert is useless
...I think something like this would work:
with n as (
    select nextval('b_id_seq') as newbid, a.id as a_id from a
),
l as (
    insert into b(id) select newbid from n returning id as b_id
)
update a set b_id = l.b_id
from l, n
where a.id = n.a_id and l.b_id = n.newbid;
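If B should also receive data copied over from A (per assumption 3), the same pattern extends naturally; in this sketch, payload is a hypothetical column standing in for whatever fields you actually copy:
-- "payload" is an invented column name for illustration only
with n as (
    select nextval('b_id_seq') as newbid, a.id as a_id, a.payload from a
),
l as (
    insert into b(id, payload) select newbid, payload from n returning id as b_id
)
update a set b_id = l.b_id
from l, n
where a.id = n.a_id and l.b_id = n.newbid;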

Add the future foreign key column, but without the constraint itself:
ALTER TABLE A ADD b_id integer;
Fill the new column with values:
WITH cte AS (
    SELECT
        id,
        ROW_NUMBER() OVER (ORDER BY id) AS b_ref
    FROM A
)
UPDATE A
SET b_id = cte.b_ref
FROM cte
WHERE A.id = cte.id;
Create the other table:
CREATE TABLE B (
id integer CONSTRAINT PK_B PRIMARY KEY
);
Add rows to the new table using the referencing column of the existing one:
INSERT INTO B (id)
SELECT b_id
FROM A;
Add the FOREIGN KEY constraint:
ALTER TABLE A
ADD CONSTRAINT FK_A_B FOREIGN KEY (b_id) REFERENCES B (id);
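If B.id should keep generating values automatically after this (the question doesn't ask for it, so treat this as an optional follow-up), the sequence behind B.id must be created and advanced past the ids already assigned via ROW_NUMBER():
-- Optional sketch: back B.id with a sequence and skip past existing ids.
CREATE SEQUENCE b_id_seq;
ALTER TABLE B ALTER COLUMN id SET DEFAULT nextval('b_id_seq');
SELECT setval('b_id_seq', (SELECT max(id) FROM B));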

PostgreSQL dialect.
You might use an anonymous code block like this:
do $$
declare
    category_cursor cursor for select id from schema1.categories;
    r_category bigint;
    setting_id bigint;
begin
    open category_cursor;
    loop
        fetch category_cursor into r_category;
        exit when not found;
        insert into schema2.setting(field)
        values ('field_value') returning id into setting_id;
        update schema1.categories set category_setting_id = setting_id
        where id = r_category;  -- match on the same column the cursor selects
    end loop;
    close category_cursor;
end; $$;
Let's assume we have two tables: the first - categories, the second - settings, which must be applied to these categories.
First step - declare the cursor (collecting ids from categories) and the variables where we store temporary data.
Then loop over the cursor, inserting the value 'field_value' into settings.
Store the returned id in the variable setting_id.
Update the categories table with that setting_id.

Related

How to check for clustered unique key while inserting in SQL table

I am trying to insert rows from a table in another database into a new database table, and I get the error below if there is no WHERE condition in the query.
Violation of UNIQUE KEY constraint 'NK_LkupxViolations'. Cannot insert duplicate key in object 'dbo.LkupxViolation'. The duplicate key value is (00000000-0000-0000-0000-000000000000, (Not Specified)).
Then I wrote the query below, adding WHERE conditions; it worked, but it didn't insert the expected number of rows.
IF EXISTS(SELECT 1 FROM sys.tables WHERE name = 'LkupxViolation')
BEGIN
    INSERT INTO dbo.[LkupxViolation]
    SELECT * FROM [DMO_DB].[dbo].[LkupxViolation]
    WHERE CGRootId NOT IN (SELECT CGRootId FROM dbo.[LkupxViolation])
      AND Name NOT IN (SELECT Name FROM dbo.[LkupxViolation])
END
ELSE
    PRINT 'LkupxViolation table does not exist'
The unique key in the table is created as:
CONSTRAINT [NK_LkupxViolations] UNIQUE CLUSTERED
(
[CGRootId] ASC,
[Name] ASC
)
Try using NOT EXISTS:
INSERT INTO dbo.[LkupxViolation]
SELECT *
FROM [DMO_DB].[dbo].[LkupxViolation] remote_l
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.[LkupxViolation] local_l
                  WHERE local_l.Name = remote_l.Name AND
                        local_l.CGRootId = remote_l.CGRootId
                 );
This checks for both values in the same row. In addition, NOT IN is not NULL-safe. If any values generated by the subquery are NULL then all rows are filtered out.
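A quick self-contained demonstration of that NULL trap (the inline values here are made up for the demo, not taken from the tables above):
-- For x = 2: "2 <> 1 AND 2 <> NULL" evaluates to UNKNOWN, not TRUE,
-- so NOT IN filters the row out even though 2 is absent from the list.
SELECT *
FROM (VALUES (1), (2), (3)) AS t(x)
WHERE x NOT IN (SELECT v FROM (VALUES (1), (NULL)) AS s(v));
-- returns no rows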

How to update table with sequential number on table without primary key?

In DB2 on Linux v11.1 I have a table:
COL1  COL2  ...(50 more columns)
A     A
A     A
B     A
B     B
etc. (3 million rows)
There can be multiple rows with the same values, like the first two rows in my sample (so obviously there is no primary key on the table).
Now I have to add a new column ID and set a unique sequential number for every row.
The result should be:
COL1  COL2  ...(50 more columns)  ID
A     A                           1
A     A                           2
B     A                           3
B     B                           4
etc. (3 million rows)
How can I write such an update statement to populate the new ID column?
Here is one way to do it, using an identity column, and it assumes that there is no existing primary key or identity column.
alter table myschema.mytab add column id integer not null default 0 ;
alter table myschema.mytab alter column id drop default ;
alter table myschema.mytab alter column id set generated always as identity ;
update myschema.mytab set id = default ;  -- assigns the next generated identity value to every existing row
-- optional, if you want the new ID column to be a surrogate primary key
alter table myschema.mytab add constraint pkey primary key(id) ;
reorg table myschema.mytab ;
runstats on table myschema.mytab with distribution and detailed indexes all;
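As an illustrative sanity check afterwards (not required), this should return 0 if every row received a distinct id:
-- 0 means all ids are unique
SELECT COUNT(*) - COUNT(DISTINCT id) FROM myschema.mytab;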
Try this:
alter table myschema.mytab add column id integer not null default 0 ;
UPDATE (SELECT ID, ROWNUMBER() OVER() RN FROM myschema.mytab) SET ID = RN;
-- Or even simpler:
-- UPDATE myschema.mytab SET ID = ROWNUMBER() OVER();

SELECT on same table AFTER INSERT TRIGGER fails?

I am writing a small AFTER INSERT trigger that is executed after the insert of a row into a table A.
So, simplified, we've got these three tables:
CREATE TABLE A (Id INTEGER, datum DATE, ...);
CREATE TABLE B (Id INTEGER, ...);
CREATE TABLE C (Id INTEGER, Aref INTEGER, Bref INTEGER, FOREIGN KEY(Aref) REFERENCES A(Id), FOREIGN KEY(Bref) REFERENCES B(Id));
Now the tricky thing is that rows of table B should only be selected if they are EITHER not referenced at all in table C, OR, if they are referenced in table C, the datum field of the row of A that is also referenced in that row of C IS NOT NULL.
Are you getting it?
So I tried my luck with a SELECT clause that looks like the following:
SELECT * FROM B b WHERE NOT EXISTS (SELECT c.* FROM C c, A a WHERE c.Bref = b.Id AND c.Aref = a.Id AND a.datum IS NULL);
Now the trigger throws an exception because, citing the error message, "the table is modified at the moment and the trigger could probably not see it". But I need the information from table A, and it needs to be an AFTER INSERT trigger, because the same trigger uses these rows for a calculation, itself inserting rows into a table D that reference this row of A.
So the question is: how is it possible to select these rows of A after insert and make sure everything stays consistent?
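That error message reads like Oracle's mutating-table error (ORA-04091); the question doesn't name the DBMS, so treat the following as an assumption. The usual way out is a compound trigger: collect the inserted ids in the row-level section, then run the query in the AFTER STATEMENT section, where A is no longer mutating. A minimal sketch (all names invented):
-- Sketch only, assuming Oracle 11g+ and the simplified tables above.
CREATE OR REPLACE TRIGGER trg_a_after_insert
FOR INSERT ON A
COMPOUND TRIGGER
  TYPE t_ids IS TABLE OF A.Id%TYPE;
  g_ids t_ids := t_ids();

  AFTER EACH ROW IS
  BEGIN
    -- remember which rows this statement inserted
    g_ids.EXTEND;
    g_ids(g_ids.COUNT) := :NEW.Id;
  END AFTER EACH ROW;

  AFTER STATEMENT IS
  BEGIN
    -- A is fully modified here, so querying it is allowed;
    -- g_ids identifies the freshly inserted rows of A for the
    -- calculation that inserts into table D.
    FOR r IN (SELECT b.Id
              FROM B b
              WHERE NOT EXISTS (SELECT 1
                                FROM C c JOIN A a ON c.Aref = a.Id
                                WHERE c.Bref = b.Id AND a.datum IS NULL)) LOOP
      NULL; -- the INSERT INTO D logic would go here
    END LOOP;
  END AFTER STATEMENT;
END;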

Update each row of a table with the corresponding value

I have two Postgres tables:
create table A(
    id_A serial not null,
    column_A varchar null,
    ...);
create table B(
    id_B serial not null,
    id_A int4 not null,
    name varchar null,
    keywords varchar null,
    ...);
An element of table A is associated with multiple elements of table B, and an element of table B is associated with one element of table A.
The column keywords in table B is a concatenation of values of columns B.name and A.column_A:
B.keywords := B.name || A.column_A
How to update with a trigger the column B.keywords of each row in table B if the value of A.column_A is updated?
In other words, I want to do something like this (pseudo-code):
FOR EACH ROW current_row IN TABLE B
UPDATE B SET keywords = (SELECT B.name || A.column_A
FROM B INNER JOIN A ON B.id_A = A.id_A
WHERE B.id_B = current_row.id_B)
WHERE id_B = current_row.id_B;
Your trigger has to call a function when A is updated:
CREATE OR REPLACE FUNCTION update_b()
RETURNS TRIGGER
AS $$
BEGIN
    UPDATE B
    SET keywords = name || NEW.column_A
    WHERE id_A = NEW.id_A;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER update_b_trigger AFTER UPDATE OF column_A
ON A
FOR EACH ROW
EXECUTE PROCEDURE update_b();
It might also be useful to add a trigger BEFORE INSERT OR UPDATE on table B to set the keywords.
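Such a trigger might look like the following sketch (trigger and function names are invented here), so that inserts into B and changes to B.name or B.id_A also keep keywords current:
CREATE OR REPLACE FUNCTION set_b_keywords()
RETURNS TRIGGER
AS $$
BEGIN
    -- recompute the derived column from the parent row in A
    SELECT NEW.name || A.column_A
    INTO NEW.keywords
    FROM A
    WHERE A.id_A = NEW.id_A;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER set_b_keywords_trigger
    BEFORE INSERT OR UPDATE OF name, id_A ON B
    FOR EACH ROW
    EXECUTE PROCEDURE set_b_keywords();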
Your approach is broken by design. Do not try to keep derived values current in the table. That's not safe for concurrent access. All kinds of complications can arise. You bloat table B (and backups) and impair write performance.
Instead, use a VIEW (or a MATERIALIZED VIEW):
CREATE VIEW ab AS
SELECT B.*, concat_ws(', ', B.name, A.column_A) AS keywords
FROM B
LEFT JOIN A USING (id_A);
With the updated table definition below, referential integrity is guaranteed and you can use [INNER] JOIN instead of LEFT [OUTER] JOIN.
Or even a simple query might be enough ...
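For instance, with the same assumed schema, the keywords can simply be computed when queried:
SELECT B.*, concat_ws(', ', B.name, A.column_A) AS keywords
FROM B
LEFT JOIN A USING (id_A)
WHERE B.id_B = 1;  -- whichever row(s) you need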
Either way, you need a PRIMARY KEY constraint in table A and a FOREIGN KEY constraint in table B:
CREATE TABLE A (
    id_A serial PRIMARY KEY,
    column_A varchar
    ...);
CREATE TABLE B (
    id_B serial PRIMARY KEY,
    id_A int4 NOT NULL REFERENCES A(id_A),
    name varchar
    -- and *no* redundant "keywords" column!
    ...);
About concatenating strings:
How to concatenate columns in a Postgres SELECT?
And I wouldn't use CaMeL-case identifiers:
Are PostgreSQL column names case-sensitive?

Define foreign key in Postgres to a subset of a target table

Example:
I have:
Table A:
int id
int table_b_id
Table B:
int id
text type
I want to add a constraint check on column table_b_id that verifies it points only to rows in table B whose type value is 'X'.
I can't change the table structure.
I understand it can be done with a CHECK constraint and a Postgres function that performs the specific query, but I've seen people recommend avoiding that.
Any input on the best approach to implement this would be helpful.
What you are referring to is not a FOREIGN KEY, which, in PostgreSQL, refers to a (number of) column(s) in another table where there is a unique index on that/those column(s), and which may have associated automatic actions when the value(s) of that/those column(s) change (ON UPDATE, ON DELETE).
You are trying to enforce a specific kind of referential integrity, similar to what a FOREIGN KEY does. You can do this with a CHECK clause and a function (because the CHECK clause does not allow sub-queries), and you can also do it with table inheritance and range partitioning (refer to a child table which holds only rows where type = 'X'), but it is probably easiest to do this with a trigger:
CREATE FUNCTION trf_test_type_x() RETURNS trigger AS $$
BEGIN
    PERFORM * FROM tableB WHERE id = NEW.table_b_id AND type = 'X';
    IF NOT FOUND THEN
        -- RAISE NOTICE 'Foreign key violation...';
        RETURN NULL;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tr_test_type_x
    BEFORE INSERT OR UPDATE ON tableA
    FOR EACH ROW EXECUTE PROCEDURE trf_test_type_x();
You can create a partial index on tableB to speed things up:
CREATE UNIQUE INDEX idx_type_X ON tableB(id) WHERE type = 'X';
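For completeness, the CHECK-plus-function approach the question mentions might look like the sketch below (names invented). It is shown mainly to illustrate why it is usually discouraged: the constraint is only checked on writes to tableA, so later changes to tableB can silently break the invariant.
-- Sketch only: a CHECK cannot contain a subquery, so the query is
-- hidden inside a function. Not re-validated when tableB changes.
CREATE FUNCTION is_type_x(b_id integer) RETURNS boolean AS $$
    SELECT EXISTS (SELECT 1 FROM tableB WHERE id = b_id AND type = 'X');
$$ LANGUAGE sql;
ALTER TABLE tableA
    ADD CONSTRAINT chk_b_is_type_x CHECK (is_type_x(table_b_id));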
The most elegant solution, in my opinion, is to use inheritance to get a subtyping behavior:
PostgreSQL 9.3 Schema Setup with inheritance:
create table B ( id int primary key );
-- Instead of creating a 'type' field, inherit from B for
-- each type with custom properties:
create table B_X ( -- some_data varchar(10),
    constraint pk primary key (id)
) inherits (B);
-- Sample data:
insert into B_X (id) values ( 1 );
insert into B (id) values ( 2 );
-- Now, instead of referencing B, you should reference B_X:
create table A ( id int primary key, B_id int references B_X(id) );
-- Here it is:
insert into A values ( 1, 1 );
-- Inserting wrong values will cause a violation:
insert into A values ( 2, 2 );
ERROR: insert or update on table "a" violates foreign key constraint "a_b_id_fkey"
Detail: Key (b_id)=(2) is not present in table "b_x".
Retrieving all data from the base table:
select * from B
Results:
| id |
|----|
| 2 |
| 1 |
Retrieving data with type:
SELECT p.relname, c.*
FROM B c inner join pg_class p on c.tableoid = p.oid
Results:
| relname | id |
|---------|----|
| b | 2 |
| b_x | 1 |