Delete all inactive rows from every table in a schema - SQL

I need to delete all rows from every table in a specific schema that are marked as inactive. The active/inactive flag is a boolean column that every table has.
How do I do this?

You can't just do this magically; you need to write a query that creates your query.
Something like this:
select distinct table_name from information_schema.columns where column_name = 'active'
would give you a list of tables. Then plug each table name into a statement of the form
delete from <tablename> where active = false
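As a rough illustration (a sketch, not verified against your schema; it assumes PostgreSQL 9.1+ for format() and a lower-case boolean column literally named active), you can have that first query generate the DELETE statements for you, then review and run them:
-- generates one DELETE per table that has an "active" column
select format('DELETE FROM %I.%I WHERE NOT active;', table_schema, table_name)
from information_schema.columns
where column_name = 'active';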

You can create a function to do that:
CREATE OR REPLACE FUNCTION delete_inactive_records()
RETURNS TEXT AS
$BODY$
DECLARE VTABLES RECORD;
VRESULT TEXT;
VNUM_ROWS INTEGER;
BEGIN
VRESULT = '';
FOR VTABLES IN (
select distinct table_schema || '.' || table_name as table_name
from information_schema.tables t
where exists (
select 1
from information_schema.columns c
where c.table_catalog = t.table_catalog
and c.table_schema = t.table_schema
and c.table_name = t.table_name
and c.column_name = 'active'
and t.table_catalog in ('<your_table_catalog1>', '<your_table_catalog2>', '<etc>')
)
order by 1
)
LOOP
RAISE NOTICE 'DELETING FROM %...', VTABLES.TABLE_NAME;
EXECUTE ('DELETE FROM ' || VTABLES.TABLE_NAME || ' WHERE NOT active');
get diagnostics vNUM_ROWS = ROW_COUNT;
VRESULT = VRESULT || vNUM_ROWS || ' records were deleted from ' || VTABLES.TABLE_NAME || chr(13);
END LOOP;
return VRESULT;
end;
$BODY$
LANGUAGE plpgsql VOLATILE;
And then call select delete_inactive_records(); any time you want to "clean up" your tables!


dynamic SQL ERROR: column "age" does not exist

Postgres 12.
I am trying to loop through a table which holds schema names, table names and column names.
I want to do various things like finding nulls, row counts etc., but I failed at the first hurdle trying to update the records column.
The table I am using:
CREATE TABLE test.table_study (
table_schema text,
table_name text,
column_name text,
records int,
No_Nulls int,
No_Blanks int,
per_pop int
);
I populate the table with some schema names, tables and columns from information_schema.columns:
insert into test.table_study select table_schema, table_name, column_name
from information_schema.columns
where table_schema like '%white'
order by table_schema, table_name, ordinal_position;
I want to populate the rest with a procedure.
The procedure:
CREATE OR REPLACE PROCEDURE test.insert_data_population()
as $$
declare s record;
declare t record;
declare c record;
BEGIN
FOR s IN SELECT distinct table_schema FROM test.table_study
LOOP
FOR t IN SELECT distinct table_name FROM test.table_study where table_schema = s.table_schema
loop
FOR c IN SELECT column_name FROM test.table_study where table_name = t.table_name
LOOP
execute 'update test.table_study set records = (select count(*) from ' || s.table_schema || '.' || t.table_name || ') where table_study.table_name = '|| t.table_name ||';';
END LOOP;
END LOOP;
END LOOP;
END;
$$
LANGUAGE plpgsql;
I get this error: SQL Error [42703]: ERROR: column "age" does not exist. The table age does exist.
When I take out the where clause
execute 'update referralunion.testinsert ti set records = (select count(*) from ' || s.table_schema || '.' || t.table_name || ') ;';
it works. I just can't figure out what's wrong.
Your procedure is structured incorrectly. What it results in is an attempt to match every column name with every table name in every schema, which I would guess produces your "column does not exist" error. Further, it shows procedural thinking; SQL requires thinking in terms of sets. Below I use basically your procedure to demonstrate, then a revised version which uses a single loop.
-- setup (dropping schema references)
create table table_study (
table_schema text,
table_name text,
column_name text,
records int,
no_nulls int,
no_blanks int,
per_pop int
);
insert into table_study(table_schema, table_name, column_name)
values ('s1','t1','age')
, ('s2','t1','xyz');
-- procedure replacing EXECUTE with Raise Notice.
create or replace procedure insert_data_population()
as $$
declare
s record;
t record;
c record;
line int = 0;
begin
for s in select distinct table_schema from table_study
loop
for t in select distinct table_name from table_study where table_schema = s.table_schema
loop
for c in select column_name from table_study where table_name = t.table_name
loop
line = line+1;
raise notice '%: update table_study set records = (select count(*) from %.% where table_study.table_name = %;'
, line, s.table_schema, t.table_name, c.column_name;
end loop;
end loop;
end loop;
end;
$$
language plpgsql;
Run procedure
do $$
begin
call insert_data_population();
end;
$$;
RESULTS
1: update table_study set records = (select count(*) from s2.t1 where table_study.table_name = age;
2: update table_study set records = (select count(*) from s2.t1 where table_study.table_name = xyz;
3: update table_study set records = (select count(*) from s1.t1 where table_study.table_name = age;
4: update table_study set records = (select count(*) from s1.t1 where table_study.table_name = xyz;
Notice lines 2 and 3: each references a column name that does not exist in that table. This results from the nested FOR structure combined with the same table name existing in different schemas.
Revision using a single SELECT statement with a single FOR loop:
create or replace
procedure insert_data_population()
language plpgsql
as $$
declare
s record;
line int = 0;
begin
for s in select distinct table_schema, table_name, column_name from table_study
loop
line = line+1;
raise notice '%: update table_study set records = (select count(*) from %.% where table_study.table_name = %;'
, line, s.table_schema, s.table_name, s.column_name;
end loop;
end;
$$;
do $$
begin
call insert_data_population();
end;
$$;
RESULTS
1: update table_study set records = (select count(*) from s2.t1 where table_study.table_name = xyz;
2: update table_study set records = (select count(*) from s1.t1 where table_study.table_name = age;
Note: In Postgres, DECLARE begins a block. It is not necessary to write DECLARE before each variable; I would actually consider that bad practice. In theory it could require an END for each DECLARE, as each could be considered a nested block. Fortunately Postgres does not require this.
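The immediate 42703 error in the original procedure comes from interpolating the table name into the WHERE clause without quotes (where table_study.table_name = age makes Postgres look for a column called age). A minimal sketch of the single-loop version with a real EXECUTE, using format() so identifiers (%I) and string literals (%L) are quoted safely; it assumes the same table_study layout as the demo above:
create or replace procedure insert_data_population()
language plpgsql
as $$
declare
s record;
begin
for s in select distinct table_schema, table_name from table_study
loop
-- %I quotes identifiers, %L quotes literals (so age becomes 'age')
execute format(
'update table_study set records = (select count(*) from %I.%I) '
|| 'where table_study.table_schema = %L and table_study.table_name = %L',
s.table_schema, s.table_name, s.table_schema, s.table_name);
end loop;
end;
$$;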

Updating a table based on JSON inside a PostgreSQL function

I am writing a plpgsql function that should update a table based on a provided JSON object. The JSON contains a table representation with all the same columns as the table itself has.
The function currently looks as follows:
CREATE OR REPLACE FUNCTION update (updated json)
BEGIN
/* transfrom json to table */
WITH updated_vals AS (
SELECT
*
FROM
json_populate_recordset(NULL::my_table, updated)
),
/* Retrieve all columns from mytable and also with reference to updated_vals table */
cols AS (
SELECT
string_agg(quote_ident(columns), ',') AS table_cols,
string_agg('updated_vals.' || quote_ident($1), ',') AS updated_cols
FROM
information_schema
WHERE
table_name = 'my_table' -- table name, case sensitive
AND table_schema = 'public' -- schema name, case sensitive
AND column_name <> 'id' -- all columns except id and user_id
AND column_name <> 'user_id'
),
/* Define the table columns separately */
table_cols AS (
SELECT
table_cols
FROM
cols
),
/* Define the updated columns separately */
updated_cols AS (
SELECT
updated_cols
FROM
cols)
/* Execute the update statement */
EXECUTE 'UPDATE my_table'
|| ' SET (' || table_cols::text || ') = (' || updated_cols::text || ') '
|| ' FROM updated_vals '
|| ' WHERE my_table.id = updated_vals.id '
|| ' AND my_table.user_id = updated_vals.user_id';
COMMIT;
END;
I noticed that combining the WITH clause with EXECUTE always triggers the error syntax error at or near EXECUTE, even when both are very simple and straightforward. Is this indeed the case, and if so, what would be an alternative approach to provide the required variables (updated_vals, table_cols and updated_cols) to EXECUTE?
If you have any other improvements on this code I'd be happy to see those for I am very new to sql/plpgsql.
Since you wrote the table name (my_table) into your function, it will always update only that one specified table from the JSON data. Because of this, you can write the table and column names into your function manually instead of using information_schema. This is the simple and easy way.
For example:
CREATE OR REPLACE FUNCTION rbac.update_users_json(updated json)
RETURNS boolean
LANGUAGE plpgsql
AS $function$
begin
update rbac.users usr
set
username = jsn.username,
first_name = jsn.first_name,
last_name = jsn.last_name
from (
select * from json_populate_recordset(NULL::rbac.users, updated)
) jsn
where jsn.id = usr.id;
return true;
END;
$function$
;
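A hypothetical call (assuming rbac.users has id, username, first_name and last_name columns) would look like:
select rbac.update_users_json('[{"id": 1, "username": "jdoe", "first_name": "John", "last_name": "Doe"}]'::json);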
For dynamic tables:
CREATE OR REPLACE FUNCTION rbac.update_users_json_dynamic(updated json)
RETURNS boolean
LANGUAGE plpgsql
AS $function$
declare
f record;
exec_sql text;
sep text;
begin
exec_sql = 'update rbac.users usr set ' || E'\n';
sep = '';
for f in
select clm.column_name
from
information_schema."tables" tbl
inner join
information_schema."columns" clm on
clm.table_name = tbl.table_name
and clm.table_schema = tbl.table_schema
where
tbl.table_schema = 'rbac'
and tbl.table_name = 'users'
and clm.column_name <> 'id'
loop
exec_sql = exec_sql || sep || f.column_name || ' = ' || 'jsn.' || f.column_name;
sep = ', ' || E'\n';
end loop;
exec_sql = exec_sql || E'\n' || 'from (select * from json_populate_recordset(NULL::rbac.users, ''' ||
updated::text || ''')) jsn ' || E'\n' || 'where jsn.id = usr.id';
execute exec_sql;
return true;
END;
$function$
;

Adding a new column at a certain place in Postgres [duplicate]

How do I add a new column to a table after the 2nd or 3rd column of the table using Postgres?
My code looks as follows:
ALTER TABLE n_domains ADD COLUMN contract_nr int after owner_id
No, there's no direct way to do that. And there's a reason for it - every query should list all the fields it needs in whatever order (and format etc) it needs them, thus making the order of the columns in one table insignificant.
If you really need to do that I can think of one workaround:
1. dump and save the description of the table in question (using pg_dump --schema-only --table=<schema.table> ...)
2. add the column you want where you want it in the saved definition
3. rename the table in the saved definition so it doesn't clash with the name of the old table when you attempt to create it
4. create the new table using this definition
5. populate the new table with the data from the old table using INSERT INTO <new_table> SELECT field1, field2, <default_for_new_field>, field3, ... FROM <old_table>
6. rename the old table
7. rename the new table to the original name
8. eventually drop the old, renamed table after you make sure everything's alright
The order of columns is not irrelevant: putting fixed-width columns at the front of the table can optimize the storage layout of your data, and it can also make working with your data easier outside of your application code.
PostgreSQL does not support altering the column ordering (see Alter column position on the PostgreSQL wiki); if the table is relatively isolated, your best bet is to recreate the table:
CREATE TABLE foobar_new ( ... );
INSERT INTO foobar_new SELECT ... FROM foobar;
DROP TABLE foobar CASCADE;
ALTER TABLE foobar_new RENAME TO foobar;
If you have a lot of views or constraints defined against the table, you can re-add all the columns after the new column and drop the original columns (see the PostgreSQL wiki for an example).
The real problem here is simply that the feature hasn't been implemented yet. Currently PostgreSQL's logical column ordering is the same as the physical ordering. That's problematic because you can't get a different logical ordering, and it's even worse because the table isn't repacked physically on its own, so by moving columns you can get different performance characteristics.
Arguing that it's that way by intent in design is pointless. It's somewhat likely to change at some point when an acceptable patch is submitted.
All of that said, is it a good idea to rely on the ordinal positioning of columns, logical or physical? Hell no. In production code you should never be using an implicit ordering or *. Why make the code more brittle than it needs to be? Correctness should always be a higher priority than saving a few keystrokes.
As a workaround, you can in fact modify the column ordering by recreating the table, or through the "add and reorder" game, as sketched below.
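A minimal sketch of that "add and reorder" game, using the question's n_domains table; contract_nr is the new column, and some_later_col is a hypothetical stand-in for each existing column that should end up after it (this ignores any constraints, defaults, or indexes that reference the moved columns):
ALTER TABLE n_domains ADD COLUMN contract_nr int;                       -- new column is appended at the end
-- for every column that currently sits after owner_id, re-add it so it follows contract_nr:
ALTER TABLE n_domains ADD COLUMN some_later_col_new text;               -- hypothetical column
UPDATE n_domains SET some_later_col_new = some_later_col;               -- copy its data across
ALTER TABLE n_domains DROP COLUMN some_later_col;                       -- drop the original
ALTER TABLE n_domains RENAME COLUMN some_later_col_new TO some_later_col;  -- restore the name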
See also:
Column tetris reordering in order to make things more space-efficient
The column order is relevant to me, so I created this function. See if it helps. It works with indexes, primary keys, and triggers; views, foreign keys, and some other features are still missing.
Example:
SELECT xaddcolumn('table', 'col3 int NOT NULL DEFAULT 0', 'col2');
Source code:
CREATE OR REPLACE FUNCTION xaddcolumn(ptable text, pcol text, pafter text) RETURNS void AS $BODY$
DECLARE
rcol RECORD;
rkey RECORD;
ridx RECORD;
rtgr RECORD;
vsql text;
vkey text;
vidx text;
cidx text;
vtgr text;
ctgr text;
etgr text;
vseq text;
vtype text;
vcols text;
BEGIN
EXECUTE 'CREATE TABLE zzz_' || ptable || ' AS SELECT * FROM ' || ptable;
--columns
vseq = '';
vcols = '';
vsql = 'CREATE TABLE ' || ptable || '(';
FOR rcol IN SELECT column_name as col, udt_name as coltype, column_default as coldef,
is_nullable as is_null, character_maximum_length as len,
numeric_precision as num_prec, numeric_scale as num_scale
FROM information_schema.columns
WHERE table_name = ptable
ORDER BY ordinal_position
LOOP
vtype = rcol.coltype;
IF (substr(rcol.coldef,1,7) = 'nextval') THEN
vtype = 'serial';
vseq = vseq || 'SELECT setval(''' || ptable || '_' || rcol.col || '_seq'''
|| ', max(' || rcol.col || ')) FROM ' || ptable || ';';
ELSIF (vtype = 'bpchar') THEN
vtype = 'char';
END IF;
vsql = vsql || E'\n' || rcol.col || ' ' || vtype;
IF (vtype in ('varchar', 'char')) THEN
vsql = vsql || '(' || rcol.len || ')';
ELSIF (vtype = 'numeric') THEN
vsql = vsql || '(' || rcol.num_prec || ',' || rcol.num_scale || ')';
END IF;
IF (rcol.is_null = 'NO') THEN
vsql = vsql || ' NOT NULL';
END IF;
IF (rcol.coldef <> '' AND vtype <> 'serial') THEN
vsql = vsql || ' DEFAULT ' || rcol.coldef;
END IF;
vsql = vsql || E',';
vcols = vcols || rcol.col || ',';
--
IF (rcol.col = pafter) THEN
vsql = vsql || E'\n' || pcol || ',';
END IF;
END LOOP;
vcols = substr(vcols,1,length(vcols)-1);
--keys
vkey = '';
FOR rkey IN SELECT constraint_name as name, column_name as col
FROM information_schema.key_column_usage
WHERE table_name = ptable
LOOP
IF (vkey = '') THEN
vkey = E'\nCONSTRAINT ' || rkey.name || ' PRIMARY KEY (';
END IF;
vkey = vkey || rkey.col || ',';
END LOOP;
IF (vkey <> '') THEN
vsql = vsql || substr(vkey,1,length(vkey)-1) || ') ';
END IF;
vsql = substr(vsql,1,length(vsql)-1) || ') WITHOUT OIDS';
--index
vidx = '';
cidx = '';
FOR ridx IN SELECT s.indexrelname as nome, a.attname as col
FROM pg_index i LEFT JOIN pg_class c ON c.oid = i.indrelid
LEFT JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = ANY(i.indkey)
LEFT JOIN pg_stat_user_indexes s USING (indexrelid)
WHERE c.relname = ptable AND i.indisunique != 't' AND i.indisprimary != 't'
ORDER BY s.indexrelname
LOOP
IF (ridx.nome <> cidx) THEN
IF (vidx <> '') THEN
vidx = substr(vidx,1,length(vidx)-1) || ');';
END IF;
cidx = ridx.nome;
vidx = vidx || E'\nCREATE INDEX ' || cidx || ' ON ' || ptable || ' (';
END IF;
vidx = vidx || ridx.col || ',';
END LOOP;
IF (vidx <> '') THEN
vidx = substr(vidx,1,length(vidx)-1) || ')';
END IF;
--trigger
vtgr = '';
ctgr = '';
etgr = '';
FOR rtgr IN SELECT trigger_name as nome, event_manipulation as eve,
action_statement as act, condition_timing as cond
FROM information_schema.triggers
WHERE event_object_table = ptable
LOOP
IF (rtgr.nome <> ctgr) THEN
IF (vtgr <> '') THEN
vtgr = replace(vtgr, '_#eve_', substr(etgr,1,length(etgr)-3));
END IF;
etgr = '';
ctgr = rtgr.nome;
vtgr = vtgr || 'CREATE TRIGGER ' || ctgr || ' ' || rtgr.cond || ' _#eve_ '
|| 'ON ' || ptable || ' FOR EACH ROW ' || rtgr.act || ';';
END IF;
etgr = etgr || rtgr.eve || ' OR ';
END LOOP;
IF (vtgr <> '') THEN
vtgr = replace(vtgr, '_#eve_', substr(etgr,1,length(etgr)-3));
END IF;
--drop the old table and create the new one
EXECUTE 'DROP TABLE ' || ptable;
IF (EXISTS (SELECT sequence_name FROM information_schema.sequences
WHERE sequence_name = ptable||'_id_seq'))
THEN
EXECUTE 'DROP SEQUENCE '||ptable||'_id_seq';
END IF;
EXECUTE vsql;
--copy data into the new table
EXECUTE 'INSERT INTO ' || ptable || '(' || vcols || ')' ||
E'\nSELECT ' || vcols || ' FROM zzz_' || ptable;
EXECUTE vseq;
EXECUTE vidx;
EXECUTE vtgr;
EXECUTE 'DROP TABLE zzz_' || ptable;
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;
@Jeremy Gustie's solution above almost works, but it will do the wrong thing if the ordinals are off (or fail altogether if the reordered ordinals make incompatible types match up). Give it a try:
CREATE TABLE test1 (one varchar, two varchar, three varchar);
CREATE TABLE test2 (three varchar, two varchar, one varchar);
INSERT INTO test1 (one, two, three) VALUES ('one', 'two', 'three');
INSERT INTO test2 SELECT * FROM test1;
SELECT * FROM test2;
The results show the problem:
testdb=> select * from test2;
three | two | one
-------+-----+-------
one | two | three
(1 row)
You can remedy this by specifying the column names in the insert:
INSERT INTO test2 (one, two, three) SELECT * FROM test1;
That gives you what you really want:
testdb=> select * from test2;
three | two | one
-------+-----+-----
three | two | one
(1 row)
The problem comes when you have legacy code that doesn't do this, as I indicated above in my comment on peufeu's reply.
Update: It occurred to me that you can achieve the same thing as listing the column names in the INSERT clause by specifying them in the SELECT clause instead. You just have to reorder them to match the ordinals in the target table:
INSERT INTO test2 SELECT three, two, one FROM test1;
And you can of course do both to be very explicit:
INSERT INTO test2 (one, two, three) SELECT one, two, three FROM test1;
That gives you the same results as above, with the column values properly matched.
The order of the columns is totally irrelevant in relational databases
Yes.
For instance, if you use Python, you would do:
cursor.execute("SELECT id, name FROM users")
for id, name in cursor:
    print(id, name)
Or you would do:
cursor.execute("SELECT * FROM users")
for row in cursor:
    print(row['id'], row['name'])
But no sane person would ever use positional results like this:
cursor.execute("SELECT * FROM users")
for id, name in cursor:
    print(id, name)
Well, it's a visual goody for DBAs and could be implemented in the engine with minor performance loss: add a column-order table to pg_catalog or wherever it's suited best, keep it in memory, and consult it for certain queries. Why overthink such a small piece of eye candy?
@Milen A. Radev:
The need for a fixed column order is not always defined by the query that pulls the data. The values returned by pg_fetch_row do not include the associated column names, so the columns have to be identified by their position in the SQL statement.
A simple select * from would require innate knowledge of the table structure, and would sometimes cause issues if the order of the columns were to change.
Using pg_fetch_assoc is a more reliable method, as you can reference the column names and therefore use a simple select * from.

How to declare a number variable where I can save the row count of a table in my loop

I work with an Oracle database. I have PL/SQL code where I run a query in a loop for multiple tables, so the table name is a variable in my code. I would like to have another variable (a single number) that I can use inside the loop so that, on every iteration, it holds the total row count of the current table.
declare
Cursor C_TABLE is
select trim(table_name) as table_name
from all_tables
where table_name in ('T1', 'T2', 'T3');
V_ROWNUM number;
begin
for m in C_TABLE
loop
for i in ( select column_name
from (
select c.column_name
from all_tab_columns c
where c.table_name = m.table_name
and c.owner = 'owner1'
)
)
loop
--I have this:
execute immediate ' insert into MY-table value (select ' || i.column_name || ' from ' || m.table_name || ')';
--I want this but it does not work of course:
V_ROWNUM := execute immediate 'select count(*) from ' || m.table_name;
execute immediate ' insert into MY-table value (select ' || i.column_name || ', ' || V_ROWNUM || ' from ' || m.table_name || ')';
end loop;
end loop;
end;
/
I could not use a plain "insert into ... select" because I am not selecting from one fixed table; the table I want to select from changes every round.
There are three things wrong with your dynamic SQL.
EXECUTE IMMEDIATE is not a function: the proper syntax is execute immediate '<<query>>' into <<variable>>.
An INSERT statement takes a VALUES clause or a SELECT but not both. SELECT would be very wrong in this case. Also note that it's VALUES not VALUE.
COLUMN_NAME is a string literal in the dynamic SQL so it needs to be in quotes. But because the SQL statement is itself a string, quotes inside dynamic strings need to be escaped, so it should be '''||column_name||'''.
So the corrected version will look something like this
declare
Cursor C_TABLE is
select trim(table_name) as table_name
from all_tables
where table_name in ('T1', 'T2', 'T3');
V_ROWNUM number;
begin
for m in C_TABLE
loop
for i in ( select column_name
from (
select c.column_name
from all_tab_columns c
where c.table_name = m.table_name
and c.owner = 'owner1'
)
)
loop
execute immediate 'select count(*) from ' || m.table_name into V_ROWNUM;
execute immediate 'insert into MY_table values ( ''' || i.column_name || ''', ' || V_ROWNUM || ')';
end loop;
end loop;
end;
/
Dynamic SQL is hard because it turns compilation errors into runtime errors. It is good practice to write the statements first as static SQL. Once you have got the basic syntax right you can convert it into dynamic SQL.
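As a hedged illustration of that advice (T1 and COL_A here are hypothetical stand-ins, and MY_table is assumed to have two columns), the static statements you want the dynamic SQL to reproduce would be:
declare
V_ROWNUM number;
begin
select count(*) into V_ROWNUM from T1;            -- static form of the first EXECUTE IMMEDIATE
insert into MY_table values ('COL_A', V_ROWNUM);  -- static form of the second
end;
/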
You can't assign the result of execute immediate to a variable; it is not a function.
But you can do it by using the INTO clause, e.g.
execute immediate 'select count(*) from ' || m.table_name into V_ROWNUM;

Updating a column matching a specific pattern in all tables in an Oracle database

I need to update a column matching a specific pattern in all tables in an Oracle database.
For example, every table has a column *_CID which is a foreign key to a master table which has a primary key CID.
Thanks
You can use the naming convention and query all_tab_columns
declare
cursor c is
select table_owner , column_name, table_name from all_tab_columns where column_name like '%_CID';
begin
for x in c loop
execute immediate 'update ' || x.table_owner || '.' || x.table_name ||' set ' || x.column_name||' = 0';
end loop;
end;
If you have valid FKs you can also use all_tab_constraints to fetch the enabled FKs that reference your main table, and then fetch the column names via the r_constraint_name, as sketched below.
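A hedged sketch of that dictionary query (MASTER_TABLE is a hypothetical name for the table owning the CID primary key):
-- list every column belonging to an enabled foreign key that references the master table
select col.owner, col.table_name, col.column_name
from all_constraints fk
join all_constraints pk
on pk.owner = fk.r_owner
and pk.constraint_name = fk.r_constraint_name
join all_cons_columns col
on col.owner = fk.owner
and col.constraint_name = fk.constraint_name
where fk.constraint_type = 'R'
and fk.status = 'ENABLED'
and pk.table_name = 'MASTER_TABLE';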
I found a solution to my question:
BEGIN
FOR x IN (SELECT owner, table_name, column_name FROM all_tab_columns WHERE column_name LIKE '%_CID') LOOP
EXECUTE IMMEDIATE 'update ' || x.owner || '.' || x.table_name ||' set ' || x.column_name||' = 0 where '||x.column_name||' = 1';
END LOOP;
END;
thanks