How to update multiple tables at the same time in PostgreSQL?

I have many tables in a database and I want to add two new columns to each of them.
For example, each table has the columns "created_at" and "modified_at", and I want to create the columns "client_created_at" and "client_modified_at"
and, at the same time, populate these new columns with the values of "created_at" and "modified_at" of each table.
I imagined, and have tried, something like this:
ALTER TABLE patients, folders, auscultations, auscultations_notes, folder_ausc_association
ADD COLUMN client_created_at bigint, client_modified_at bigint;
UPDATE patients, folders, auscultations, auscultations_notes, folder_ausc_association
SET client_created_at = created_at, client_modified_at = modified_at
I'm not sure about how to structure it, any help would be appreciated!

In addition to the solution from Laurenz Albe, you could use an anonymous code block to do this job. Such a block can be very handy when you have many tables and don't want to write one statement per table.
DO $$
DECLARE
  row record;
BEGIN
  FOR row IN SELECT * FROM pg_tables WHERE schemaname = 'public'
  LOOP
    EXECUTE 'ALTER TABLE public.' || quote_ident(row.tablename) || ' ADD COLUMN client_created_at bigint, ADD COLUMN client_modified_at bigint;';
    EXECUTE 'UPDATE public.' || quote_ident(row.tablename) || ' SET client_created_at = created_at, client_modified_at = modified_at;';
  END LOOP;
END;
$$;
Note: This code block adds the columns to all tables in the schema public - use it with care! You can adapt it to just the tables you need by changing this query in the block:
SELECT * FROM pg_tables WHERE schemaname = 'public'
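For example, to restrict the block to the five tables from the original question, the driving query could become something like this (a sketch; adjust the list to your schema):
SELECT * FROM pg_tables
WHERE schemaname = 'public'
  AND tablename IN ('patients', 'folders', 'auscultations', 'auscultations_notes', 'folder_ausc_association');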

You'll have to run each of your two statements once per table; PostgreSQL's ALTER TABLE and UPDATE operate on a single table at a time.
Define a maintenance window, and then run the following for each table:
ALTER TABLE patients
   ADD client_created_at bigint,
   ADD client_modified_at bigint;

UPDATE patients
SET client_created_at = created_at,
    client_modified_at = modified_at;

ALTER TABLE patients
   ALTER client_created_at SET NOT NULL,
   ALTER client_created_at SET DEFAULT extract(epoch FROM current_timestamp),
   ALTER client_modified_at SET NOT NULL,
   ALTER client_modified_at SET DEFAULT extract(epoch FROM current_timestamp);
Use a different DEFAULT if you have different needs.
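For example, if the bigint columns are meant to hold epoch milliseconds rather than seconds (an assumption about your schema), the default could be cast accordingly:
ALTER TABLE patients
   ALTER client_created_at SET DEFAULT (extract(epoch FROM current_timestamp) * 1000)::bigint,
   ALTER client_modified_at SET DEFAULT (extract(epoch FROM current_timestamp) * 1000)::bigint;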

Related

Postgresql - Union table values from different schemas in a database into one table

I have a database called Knowledge in Postgres. It has multiple schemas, every schema has the same number of tables, and the tables have the same columns as well.
Now I want to create a new schema called Aggregate, with a table called aggregate.table1, and put the values from schema1.table1 and schema2.table1 into it.
I also need to add another column in Aggregate.table1 which holds a value identifying the source schema.
If any value in schema1.table1 is updated, then aggregate.table1 should get the updated values.
Question:
Is this possible in PostgreSQL? If so, please help me with this.
I need this aggregated table for further processing.
You can try writing an anonymous code block to iterate over all schemas and tables, so that you can import your data into the aggregate schema. The following block searches for all tables contained in the schemas s1 and s2, creates a corresponding table in the schema s_agg, and finally copies their records.
DO $$
DECLARE
  row record;
BEGIN
  FOR row IN SELECT * FROM pg_tables WHERE schemaname IN ('s1', 's2') LOOP
    EXECUTE 'CREATE TABLE IF NOT EXISTS s_agg.' || quote_ident(row.tablename) ||
            ' AS TABLE ' || quote_ident(row.schemaname) || '.' || quote_ident(row.tablename) ||
            ' WITH NO DATA;';
    EXECUTE 'INSERT INTO s_agg.' || quote_ident(row.tablename) ||
            ' SELECT * FROM ' || quote_ident(row.schemaname) || '.' || quote_ident(row.tablename);
  END LOOP;
END;
$$;
Demo
CREATE SCHEMA s1;
CREATE SCHEMA s2;
CREATE SCHEMA s_agg;
CREATE TABLE s1.t1 (id int);
INSERT INTO s1.t1 VALUES (1);
CREATE TABLE s2.t1 (id int);
INSERT INTO s2.t1 VALUES (42);
DO $$
DECLARE
  row record;
BEGIN
  FOR row IN SELECT * FROM pg_tables WHERE schemaname IN ('s1', 's2') LOOP
    EXECUTE 'CREATE TABLE IF NOT EXISTS s_agg.' || quote_ident(row.tablename) ||
            ' AS TABLE ' || quote_ident(row.schemaname) || '.' || quote_ident(row.tablename) ||
            ' WITH NO DATA;';
    EXECUTE 'INSERT INTO s_agg.' || quote_ident(row.tablename) ||
            ' SELECT * FROM ' || quote_ident(row.schemaname) || '.' || quote_ident(row.tablename);
  END LOOP;
END;
$$;
-- contains values of t1 from s1 and s2
SELECT * FROM s_agg.t1;
 id
----
  1
 42
Note: This code works under the assumption that the aggregate schema is either empty or has empty tables; otherwise data will be duplicated. If you run this periodically and your tables aren't too large, you can add a DROP TABLE before the CREATE TABLE statement, as sketched below. To keep the aggregate tables in sync on every commit on all tables of all schemas, you have to take a look at triggers or even logical replication.
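For example, a variant of the block above that rebuilds the aggregate tables on each run and also fills the extra schema column the question asks for might look like the following sketch (the column name source_schema is an assumption):
DO $$
DECLARE
  row record;
BEGIN
  -- recreate each aggregate table once, using the first source schema
  -- that defines it as the template, so reruns don't duplicate data
  FOR row IN
    SELECT DISTINCT ON (tablename) tablename, schemaname
    FROM pg_tables
    WHERE schemaname IN ('s1', 's2')
    ORDER BY tablename, schemaname
  LOOP
    EXECUTE 'DROP TABLE IF EXISTS s_agg.' || quote_ident(row.tablename) || ';';
    EXECUTE 'CREATE TABLE s_agg.' || quote_ident(row.tablename) ||
            ' AS TABLE ' || quote_ident(row.schemaname) || '.' || quote_ident(row.tablename) ||
            ' WITH NO DATA;';
    -- extra column recording which schema each row came from
    EXECUTE 'ALTER TABLE s_agg.' || quote_ident(row.tablename) ||
            ' ADD COLUMN source_schema text;';
  END LOOP;
  -- copy the rows, tagging each with its source schema
  FOR row IN SELECT schemaname, tablename FROM pg_tables WHERE schemaname IN ('s1', 's2') LOOP
    EXECUTE 'INSERT INTO s_agg.' || quote_ident(row.tablename) ||
            ' SELECT t.*, ' || quote_literal(row.schemaname) ||
            ' FROM ' || quote_ident(row.schemaname) || '.' || quote_ident(row.tablename) || ' t;';
  END LOOP;
END;
$$;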

Finding every record of specific string in database

I have a database with the column "endpointid" in a lot of tables. I am looking for a search function that finds every table containing a specific endpointid, so that I can write a query to delete that endpoint. I have tried a delete that removes it from all tables, but that does not work properly, since a specific endpointid might not be in all tables. I know the following query gives all tables with the column name:
select table_name from all_tab_columns where lower(column_name) like lower('%endpointid%');
How can I extend that query to search for a specific record of endpointid?
Here is an example to delete rows with a specific endpointid value:
CREATE TABLE mytest (
  endpointid NUMBER
);

INSERT INTO mytest VALUES ( 1 );
INSERT INTO mytest VALUES ( 2 );

DECLARE
  ep NUMBER := 2;
BEGIN
  FOR t_rec IN (
    SELECT table_name
    FROM all_tab_columns
    WHERE lower(column_name) LIKE lower('%endpointid%')
  ) LOOP
    EXECUTE IMMEDIATE 'delete from ' || t_rec.table_name || ' where endpointid = :1'
      USING ep;
  END LOOP;
END;
/
Note that if these tables have foreign key relationships, this may fail, since it does not take the ordering of the table references into account. If that is needed, you would have to structure your metadata query to find those relationships.
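If you need that, a starting point might be a query like this sketch against all_constraints, which lists the child-to-parent foreign key links among the affected tables so you can order the deletes child-first (a full solution would topologically sort these dependencies):
SELECT c.table_name AS child_table,
       p.table_name AS parent_table
FROM all_constraints c
JOIN all_constraints p
  ON p.constraint_name = c.r_constraint_name
 AND p.owner = c.r_owner
WHERE c.constraint_type = 'R'
  AND c.table_name IN (
        SELECT table_name
        FROM all_tab_columns
        WHERE lower(column_name) LIKE lower('%endpointid%')
      );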

PostgreSQL truncate existing field and altering to add character limit

I am altering an existing table that is already filled with data. In my table 'section', I want to set a character limit on the column "name", specifically type VARCHAR(60). However, there has not been a character limit before, and I would like to truncate any existing values in the name column so that they match this restriction before my ALTER script runs.
I'm still getting several error messages, including from my LEFT expression, which is what I'm using to truncate the string in the "name" column. LEFT complains about how I'm passing it the string to be truncated, whether I put the parameters in parentheses or not. This is where I'm at so far:
DO $$
DECLARE
_name text;
_id uuid;
BEGIN
FOR _name, _id IN SELECT (name, id) FROM %SCHEMA%.section
LOOP
IF (_name > 60)
THEN
SET name = LEFT (_name, 60) WHERE id = _id;
END IF;
END LOOP;
RETURN NEW;
END $$;
Once I have this done, I know my ALTER script is very simple:
ALTER TABLE IF EXISTS %SCHEMA%.section ALTER COLUMN name TYPE VARCHAR(60);
You can also make use of the USING clause of ALTER TABLE. This allows you to do the truncation as part of the ALTER, rather than as two separate commands.
ALTER TABLE myschema.mytable
ALTER COLUMN mycolumn
TYPE VARCHAR(60)
USING LEFT(mycolumn, 60);
https://www.postgresql.org/docs/9.6/static/sql-altertable.html
Use an UPDATE query, like this:
UPDATE myschema.mytable
SET name = LEFT(mytable.name, 60)
WHERE LENGTH(mytable.name) > 60;
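With the overlong values truncated, the simple ALTER from the question should then go through:
ALTER TABLE myschema.mytable ALTER COLUMN name TYPE VARCHAR(60);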

Postgres: How to set column default value as another column value while altering the table

I have a Postgres table with millions of records in it. Now I want to add a new column called "time_modified" to that table, populated with the value of another column, "last_event_time". Running a migration script takes a long time, so I need a simple solution to run in production.
Assuming that the columns are timestamps, you can try:
alter table my_table add time_modified text;
alter table my_table alter time_modified type timestamp using last_event_time;
I suggest using a function with pg_sleep, which waits between the iterations of the loop:
SELECT pg_sleep(seconds);
This way you don't hold an exclusive lock (or other locks) on your_table for the whole operation, although the total execution time is long:
alter table your_table add time_modified timestamp;

CREATE OR REPLACE FUNCTION update_new_column()
RETURNS void AS
$BODY$
DECLARE
  rec record;
BEGIN
  FOR rec IN (SELECT id, last_event_time FROM your_table) LOOP
    UPDATE your_table SET time_modified = rec.last_event_time WHERE id = rec.id;
    PERFORM pg_sleep(0.01);
  END LOOP;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
and execute the function:
select update_new_column();

Using a sequence on an existing table

I've got a table that people have been inserting into, getting the primary key by doing a
SELECT max(id)+1 FROM table_a;
I want to add some records to that table using a simple INSERT INTO table_a SELECT ... FROM table_b, table_c ... SQL script, and I'm wondering how to generate the primary keys. My first thought was to create a temporary sequence, but Oracle evidently doesn't have a select setval to set the first value. So how do I get the current value of max(id)+1 into the "start with" parameter of my sequence?
I found something online that I thought would work:
COLUMN S new_value st
select max(id)+1 S from table_a;
CREATE SEQUENCE cra_seq start with &st;
But it doesn't actually use st in the CREATE SEQUENCE; instead it prompts me to enter the value, which isn't what I need.
Is this something like what you want?
declare
  id integer;
begin
  select max(id) + 1 into id from table_a;
  execute immediate 'create sequence myseq start with ' || TO_CHAR(id);
end;
/
Couldn't you use the row_number function, like so:
Insert Into Destination( Id, ... )
Select row_number() over( order by TableA.Col1... ) + MaxDestination.MaxId Num
     , ....
From TableA, TableB, ...
Cross Join ( Select Max(Id) MaxId From Destination ) MaxDestination
You can use the row_number analytic function to generate row numbers (1 through N).
Before you do the insert, get the max id that is in the table, then add the row number to that max, and it will populate your table correctly.
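A minimal sketch of that idea in Oracle, with hypothetical table and column names:
-- Continue table_a's max(id)+1 numbering while bulk-inserting from table_b
-- (table_a, table_b, and payload are placeholder names)
INSERT INTO table_a (id, payload)
SELECT m.max_id + ROW_NUMBER() OVER (ORDER BY b.payload),
       b.payload
FROM table_b b
CROSS JOIN (SELECT NVL(MAX(id), 0) AS max_id FROM table_a) m;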
In PostgreSQL:
CREATE SEQUENCE new_seq;
ALTER TABLE existing_table ADD COLUMN new_serial_column bigint DEFAULT 0;
UPDATE existing_table SET new_serial_column = nextval('new_seq');
ALTER TABLE existing_table ALTER COLUMN new_serial_column SET NOT NULL;
ALTER TABLE existing_table ALTER COLUMN new_serial_column SET DEFAULT nextval('new_seq');
That code is not idempotent, though, so check that you haven't already created the new sequence and column; something like:
CREATE FUNCTION fixup_existing_table() RETURNS void AS $$
DECLARE
  seq_column_exists integer;
BEGIN
  SELECT count(column_name) INTO seq_column_exists
  FROM information_schema.columns
  WHERE table_name = 'existing_table' AND column_name = 'new_serial_column';
  IF seq_column_exists != 0 THEN
    RETURN;
  END IF;
  CREATE SEQUENCE new_seq;
  ALTER TABLE existing_table ADD COLUMN new_serial_column bigint DEFAULT 0;
  UPDATE existing_table SET new_serial_column = nextval('new_seq');
  ALTER TABLE existing_table ALTER COLUMN new_serial_column SET NOT NULL;
  ALTER TABLE existing_table ALTER COLUMN new_serial_column SET DEFAULT nextval('new_seq');
END;
$$ LANGUAGE plpgsql;
Then you can safely call SELECT fixup_existing_table() to alter the schema as many times as you like, e.g. from some dumb update script.
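Repeated calls are then harmless; after the first run, the early RETURN fires because the column already exists:
SELECT fixup_existing_table(); -- first call: creates the sequence and column
SELECT fixup_existing_table(); -- subsequent calls: no-op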