postgres: find all integer columns with their current max values - sql

How do I find all integer-typed primary key columns, together with their current maximum values, from all tables in all databases of a Postgres instance?
I want to find all the int-typed primary key columns from all tables that are nearing the integer overflow limit of 2147483647.

CREATE OR REPLACE FUNCTION intpkmax() RETURNS
TABLE(schema_name name, table_name name, column_name name, max_value integer)
LANGUAGE plpgsql STABLE AS
$$BEGIN
   /* loop through tables with a single integer column as primary key */
   FOR schema_name, table_name, column_name IN
      SELECT sch.nspname, tab.relname, col.attname
      FROM pg_class tab
         JOIN pg_constraint con ON con.conrelid = tab.oid
         JOIN pg_attribute col ON col.attrelid = tab.oid
                              AND col.attnum = con.conkey[1]
         JOIN pg_namespace sch ON sch.oid = tab.relnamespace
      WHERE con.contype = 'p'
        AND array_length(con.conkey, 1) = 1
        AND col.atttypid = 'integer'::regtype
        AND NOT col.attisdropped
   LOOP
      /* get the maximum value of the primary key column */
      EXECUTE 'SELECT max(' || quote_ident(column_name) ||
              ') FROM ' || quote_ident(schema_name) ||
              '.' || quote_ident(table_name)
      INTO max_value;
      /* return the next result */
      RETURN NEXT;
   END LOOP;
END;$$;
Then you can get a list with
SELECT * FROM intpkmax();
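If you want the tables closest to overflow listed first, you can annotate and sort the output of the function; a minimal sketch (the headroom arithmetic is my own addition):
SELECT schema_name, table_name, column_name, max_value,
       2147483647 - max_value AS remaining,
       round(100.0 * max_value / 2147483647, 2) AS pct_used
FROM intpkmax()
WHERE max_value IS NOT NULL
ORDER BY max_value DESC;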

Related

How to list all column names in a table that contain only null values in Oracle [duplicate]

I need to know which columns of one table have only null values. I understand that I should loop over user_tab_columns, but how do I detect only the columns with null values?
Thanks, and sorry for my English.
To perform a query where you don't know the column identifiers in advance, you need to use dynamic SQL. Assuming you already know the table is not empty, you could do something like:
declare
  l_count pls_integer;
begin
  for r in (
    select table_name, column_name
    from user_tab_columns
    where table_name = 'T42'
    and nullable = 'Y'
  )
  loop
    execute immediate 'select count(*) '
      || ' from "' || r.table_name || '"'
      || ' where "' || r.column_name || '" is not null'
      into l_count;
    if l_count = 0 then
      dbms_output.put_line('Table ' || r.table_name
        || ' column ' || r.column_name || ' only has nulls');
    end if;
  end loop;
end;
/
Remember to set serveroutput on or your client's equivalent before executing.
The cursor gets the columns from the table which are declared as nullable (if they aren't, not much point checking them; though this won't catch explicit check constraints). For each column it builds a query to count the rows where that column is not null. If that count is zero then it didn't find any that are not null, therefore they all are. Again, assuming you know the table isn't empty before you start.
I've included the table name in the cursor select list and references so you only need to change the name in one place to search a different table, or you could use a variable for that name. Or check multiple tables at once by changing that filter.
You may get better performance by selecting a dummy value from any non-null row, with a rownum stop check - which means it will stop as soon as it finds a non-null value, rather than having to check every row to get an actual count:
declare
  l_flag pls_integer;
begin
  for r in (
    select table_name, column_name
    from user_tab_columns
    where table_name = 'T42'
    and nullable = 'Y'
  )
  loop
    begin -- inner block to allow exception trapping within loop
      execute immediate 'select 42 '
        || ' from "' || r.table_name || '"'
        || ' where "' || r.column_name || '" is not null'
        || ' and rownum < 2'
        into l_flag;
      -- if this found anything there is a non-null value
    exception
      when no_data_found then
        dbms_output.put_line('Table ' || r.table_name
          || ' column ' || r.column_name || ' only has nulls');
    end;
  end loop;
end;
/
or you could do something similar with an exists() check.
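For completeness, a minimal sketch of what that exists() variant might look like; the query text here is my own, not from the original answer:
declare
  l_flag pls_integer;
begin
  for r in (
    select table_name, column_name
    from user_tab_columns
    where table_name = 'T42'
    and nullable = 'Y'
  )
  loop
    -- exists() stops at the first matching row, like the rownum check above
    execute immediate 'select count(*) from dual'
      || ' where exists (select null from "' || r.table_name || '"'
      || ' where "' || r.column_name || '" is not null)'
      into l_flag;
    if l_flag = 0 then
      dbms_output.put_line('Table ' || r.table_name
        || ' column ' || r.column_name || ' only has nulls');
    end if;
  end loop;
end;
/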
If you don't know that the table has data then you can do a simple count(*) from the table before the loop to check if it is empty, and report that instead:
...
begin
  if l_count = 0 then
    dbms_output.put_line('Table is empty');
    return;
  end if;
...
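Spelled out, a minimal sketch of that pre-check (the count(*) query is an assumption, since the original snippet elides it; T42 stands in for your table):
declare
  l_count pls_integer;
begin
  -- check the table has any rows at all before inspecting its columns
  select count(*) into l_count from t42;
  if l_count = 0 then
    dbms_output.put_line('Table is empty');
    return;
  end if;
  -- ... per-column loop from the earlier blocks goes here ...
end;
/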
Or you could combine it with the cursor query; but this would need some work if you wanted to check multiple tables at once, as it would stop as soon as it found any empty one (have to leave you something to do... *8-)
declare
  l_count_any pls_integer;
  l_count_not_null pls_integer;
begin
  for r in (
    select table_name, column_name
    from user_tab_columns
    where table_name = 'T42'
    and nullable = 'Y'
  )
  loop
    execute immediate 'select count(*),'
      || ' count(case when "' || r.column_name || '" is not null then 1 end)'
      || ' from "' || r.table_name || '"'
      into l_count_any, l_count_not_null;
    if l_count_any = 0 then
      dbms_output.put_line('Table ' || r.table_name || ' is empty');
      exit; -- only report once
    elsif l_count_not_null = 0 then
      dbms_output.put_line('Table ' || r.table_name
        || ' column ' || r.column_name || ' only has nulls');
    end if;
  end loop;
end;
/
You could of course populate a collection or make it a pipelined function or whatever if you didn't want to rely on dbms_output, but I assume this is a one-off check so it is probably acceptable.
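For illustration, a minimal sketch of the pipelined-function idea; every type and function name here is hypothetical:
create or replace type t_null_col as object (
  table_name varchar2(128),
  column_name varchar2(128)
);
/
create or replace type t_null_col_tab as table of t_null_col;
/
create or replace function all_null_columns(p_table in varchar2)
  return t_null_col_tab pipelined
is
  l_count pls_integer;
begin
  for r in (
    select table_name, column_name
    from user_tab_columns
    where table_name = p_table
    and nullable = 'Y'
  )
  loop
    execute immediate 'select count(*) from "' || r.table_name
      || '" where "' || r.column_name || '" is not null'
      into l_count;
    if l_count = 0 then
      -- emit one row per all-null column instead of printing it
      pipe row (t_null_col(r.table_name, r.column_name));
    end if;
  end loop;
  return;
end;
/
Then: select * from table(all_null_columns('T42'));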
You can loop through your columns and count the null rows. If that count equals your table's row count, then that column has only null values.
The first question is whether a column with zero rows should be regarded as a column containing only (null) values. That decision is yours: the scripts below provide solutions for both interpretations. (In my opinion: no, an empty column is not a column with only (null) values.)
If you want to know about the (null) values in one table, you can check it with count(column):
select count(column) from table
and when count(column) = 0, the column either has only (null) values or has no rows at all (so you cannot make a correct decision from this alone).
E.g. the following three tables (x, y, and z) have the following contents:
select * from x;
N_X M_X
---------------
100 (null)
200 (null)
300 (null)
select * from y;
N_Y M_Y
---------------
101 (null)
202 (null)
303 apple
select * from z;
N_Z M_Z
---------------
The count() selects:
select count(n_x), count(m_x) from x;
COUNT(N_X) COUNT(M_X)
-----------------------
3 0
select count(n_y), count(m_y) from y;
COUNT(N_Y) COUNT(M_Y)
-----------------------
3 1
select count(n_z), count(m_Z) from z;
COUNT(N_Z) COUNT(M_Z)
-----------------------
0 0
As you can see, the difference between x and y shows up, but you cannot tell whether table z has no rows or is full of (null) values.
The general solution:
I have separated the schema and the DB level, but the basic idea is the same:
Schema level: the current user’s table
DB level: all users or a chosen schema
The number of (null) values in one column:
all_tab_columns.num_nulls
(or: user_tab_columns.num_nulls).
And we need the num_rows of the table:
all_all_tables.num_rows
(or: user_all_tables.num_rows)
Where num_nulls equals num_rows, there are only (null) values.
First, you need to run DBMS_STATS to refresh the statistics.
on database level:
exec DBMS_STATS.GATHER_DATABASE_STATS;
(it can use a lot of resources)
on schema level:
EXEC DBMS_STATS.gather_schema_stats('TRANEE',DBMS_STATS.AUTO_SAMPLE_SIZE); (owner = tranee)
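If you only care about a single table, gathering statistics for just that table is much cheaper; a small sketch (owner and table name are examples). Bear in mind that num_nulls and num_rows come from statistics, so with sampled gathers they can be estimates:
EXEC DBMS_STATS.GATHER_TABLE_STATS('TRANEE', 'X');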
-- if a column with zero rows counts as having only (null) values -> exclude the num_nulls > 0 condition
-- if a column with zero rows does not count -> include the num_nulls > 0 condition
the scripts:
-- 1. current user
select
a.table_name,
a.column_name,
a.num_nulls,
b.num_rows
from user_tab_columns a, user_all_tables b
where a.table_name = b.table_name
and num_nulls = num_rows
and num_nulls > 0;
-- 2. chosen user / all users -> exclude the a.owner = 'TRANEE' condition
select
a.owner,
a.table_name,
a.column_name,
a.num_nulls,
b.num_rows
from all_tab_columns a, all_all_tables b
where a.owner = b.owner
and a.table_name = b.table_name
and a.owner = 'TRANEE'
and num_nulls = num_rows
and num_nulls > 0;
TABLE_NAME COLUMN_NAME NUM_NULLS NUM_ROWS
----------------------------------------------------
LEADERS COMM 4 4
EMP_ACTION ACTION 12 12
X M_X 3 3
These tables and columns have only (null) values in tranee schema.

Updating a table based on JSON inside a PostgreSQL function

I am writing a plpgsql function that should update a table based on a provided JSON object. The JSON contains a table representation with all the same columns as the table itself has.
The function currently looks as follows:
CREATE OR REPLACE FUNCTION update (updated json)
BEGIN
  /* transform json to table */
  WITH updated_vals AS (
    SELECT *
    FROM json_populate_recordset(NULL::my_table, updated)
  ),
  /* retrieve all columns from my_table, also with reference to the updated_vals table */
  cols AS (
    SELECT
      string_agg(quote_ident(columns), ',') AS table_cols,
      string_agg('updated_vals.' || quote_ident($1), ',') AS updated_cols
    FROM information_schema
    WHERE table_name = 'my_table' -- table name, case sensitive
      AND table_schema = 'public' -- schema name, case sensitive
      AND column_name <> 'id' -- all columns except id and user_id
      AND column_name <> 'user_id'
  ),
  /* define the table columns separately */
  table_cols AS (
    SELECT table_cols
    FROM cols
  ),
  /* define the updated columns separately */
  updated_cols AS (
    SELECT updated_cols
    FROM cols
  )
  /* execute the update statement */
  EXECUTE 'UPDATE my_table'
    || ' SET (' || table_cols::text || ') = (' || updated_cols::text || ') '
    || ' FROM updated_vals '
    || ' WHERE my_table.id = updated_vals.id '
    || ' AND my_table.user_id = updated_vals.user_id';
  COMMIT;
END;
I noticed that combining the WITH clause with EXECUTE always triggers the error syntax error at or near EXECUTE, even when both are very simple and straightforward. Is this indeed the case, and if so, what would be an alternative approach to provide the required variables (updated_vals, table_cols and updated_cols) to EXECUTE?
If you have any other improvements to this code, I'd be happy to see them, as I am very new to sql/plpgsql.
Since you hard-coded the table name (my_table) in your function, it will always update only that one specified table from the JSON data. Because of this, you can write the table and column names in your function manually instead of using information_schema. This is the simple and easy way.
For example:
CREATE OR REPLACE FUNCTION rbac.update_users_json(updated json)
RETURNS boolean
LANGUAGE plpgsql
AS $function$
begin
  update rbac.users usr
  set
    username = jsn.username,
    first_name = jsn.first_name,
    last_name = jsn.last_name
  from (
    select * from json_populate_recordset(NULL::rbac.users, updated)
  ) jsn
  where jsn.id = usr.id;
  return true;
END;
$function$;
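It can then be called like this, for example (the JSON payload is made up to match the rbac.users columns):
SELECT rbac.update_users_json(
  '[{"id": 1, "username": "jdoe", "first_name": "John", "last_name": "Doe"},
    {"id": 2, "username": "asmith", "first_name": "Anna", "last_name": "Smith"}]'::json
);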
For dynamic tables:
CREATE OR REPLACE FUNCTION rbac.update_users_json_dynamic(updated json)
RETURNS boolean
LANGUAGE plpgsql
AS $function$
declare
  f record;
  exec_sql text;
  sep text;
begin
  exec_sql = 'update rbac.users usr set ' || E'\n';
  sep = '';
  for f in
    select clm.column_name
    from information_schema."tables" tbl
    inner join information_schema."columns" clm
      on clm.table_name = tbl.table_name
      and clm.table_schema = tbl.table_schema
    where tbl.table_schema = 'rbac' -- schema of the target table
      and tbl.table_name = 'users'
      and clm.column_name <> 'id'
  loop
    exec_sql = exec_sql || sep || f.column_name || ' = ' || 'jsn.' || f.column_name;
    sep = ', ' || E'\n';
  end loop;
  exec_sql = exec_sql || E'\n' || 'from (select * from json_populate_recordset(NULL::rbac.users, ''' ||
    updated::text || ''')) jsn ' || E'\n' || 'where jsn.id = usr.id';
  execute exec_sql;
  return true;
END;
$function$;
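One caveat with the string-concatenation approach: the column names go into the statement unquoted, and the JSON text is spliced in by hand. A minimal variation using format() with %I/%L placeholders, which handles the quoting safely; this is my own sketch, not part of the original answer:
CREATE OR REPLACE FUNCTION rbac.update_users_json_fmt(updated json)
RETURNS boolean
LANGUAGE plpgsql
AS $function$
declare
  f record;
  exec_sql text;
  sep text := '';
begin
  exec_sql := 'update rbac.users usr set ';
  for f in
    select column_name
    from information_schema.columns
    where table_schema = 'rbac'
      and table_name = 'users'
      and column_name <> 'id'
  loop
    -- %I quotes identifiers, so unusual column names cannot break the statement
    exec_sql := exec_sql || sep || format('%I = jsn.%I', f.column_name, f.column_name);
    sep := ', ';
  end loop;
  -- %L quotes the JSON text as a literal, avoiding the manual '' escaping
  exec_sql := exec_sql
    || format(' from (select * from json_populate_recordset(NULL::rbac.users, %L::json)) jsn',
              updated::text)
    || ' where jsn.id = usr.id';
  execute exec_sql;
  return true;
end;
$function$;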

Adding a new column at a certain place in Postgres [duplicate]

How do I add a new column to a table after the 2nd or 3rd column of the table using Postgres?
My code looks as follows
ALTER TABLE n_domains ADD COLUMN contract_nr int after owner_id
No, there's no direct way to do that. And there's a reason for it - every query should list all the fields it needs in whatever order (and format etc) it needs them, thus making the order of the columns in one table insignificant.
If you really need to do that I can think of one workaround:
1. dump and save the description of the table in question (using pg_dump --schema-only --table=<schema.table> ...)
2. add the column you want where you want it in the saved definition
3. rename the table in the saved definition so as not to clash with the name of the old table when you attempt to create it
4. create the new table using this definition
5. populate the new table with the data from the old table using 'INSERT INTO <new_table> SELECT field1, field2, <default_for_new_field>, field3,... FROM <old_table>'
6. rename the old table
7. rename the new table to the original name
8. eventually drop the old, renamed table after you make sure everything's alright
The order of columns is not irrelevant: putting fixed-width columns at the front of the table can optimize the storage layout of your data, and it can also make working with your data easier outside of your application code.
PostgreSQL does not support altering the column ordering (see Alter column position on the PostgreSQL wiki); if the table is relatively isolated, your best bet is to recreate the table:
CREATE TABLE foobar_new ( ... );
INSERT INTO foobar_new SELECT ... FROM foobar;
DROP TABLE foobar CASCADE;
ALTER TABLE foobar_new RENAME TO foobar;
If you have a lot of views or constraints defined against the table, you can re-add all the columns after the new column and drop the original columns (see the PostgreSQL wiki for an example).
The real problem here is that it's not done yet. Currently PostgreSQL's logical ordering is the same as the physical ordering. That's problematic because you can't get a different logical ordering, but it's even worse because the table isn't physically packed automatically, so by moving columns you can get different performance characteristics.
Arguing that it's that way by intent in design is pointless. It's somewhat likely to change at some point when an acceptable patch is submitted.
All of that said, is it a good idea to rely on the ordinal positioning of columns, logical or physical? Hell no. In production code you should never be using an implicit ordering or *. Why make the code more brittle than it needs to be? Correctness should always be a higher priority than saving a few keystrokes.
As a work around, you can in fact modify the column ordering by recreating the table, or through the "add and reorder" game
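A minimal sketch of that "add and reorder" game for the n_domains example from the question; the name column and its type are assumptions, and any defaults, constraints, or indexes on a moved column have to be recreated by hand:
-- suppose n_domains is (id, owner_id, name) and contract_nr should follow owner_id
ALTER TABLE n_domains ADD COLUMN contract_nr int;     -- lands at the end: (id, owner_id, name, contract_nr)
-- "move" name behind contract_nr by re-adding it at the end
ALTER TABLE n_domains ADD COLUMN name_tmp text;
UPDATE n_domains SET name_tmp = name;
ALTER TABLE n_domains DROP COLUMN name;
ALTER TABLE n_domains RENAME COLUMN name_tmp TO name; -- now (id, owner_id, contract_nr, name)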
See also,
Column tetris reordering in order to make things more space-efficient
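To make the space argument concrete: on a 64-bit build an int8 is 8-byte aligned, so interleaving narrow and wide columns wastes alignment padding; a small illustration (table names made up):
-- padded layout: a (4 bytes) + 4 bytes padding + b (8 bytes) + c (4 bytes)
CREATE TABLE bad_order (a int4, b int8, c int4);
-- packed layout: b (8 bytes) + a (4 bytes) + c (4 bytes), no padding between columns
CREATE TABLE good_order (b int8, a int4, c int4);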
The column order is relevant to me, so I created this function. See if it helps. It handles indexes, the primary key, and triggers. Views, foreign keys, and some other features are missing.
Example:
SELECT xaddcolumn('table', 'col3 int NOT NULL DEFAULT 0', 'col2');
Source code:
CREATE OR REPLACE FUNCTION xaddcolumn(ptable text, pcol text, pafter text) RETURNS void AS $BODY$
DECLARE
rcol RECORD;
rkey RECORD;
ridx RECORD;
rtgr RECORD;
vsql text;
vkey text;
vidx text;
cidx text;
vtgr text;
ctgr text;
etgr text;
vseq text;
vtype text;
vcols text;
BEGIN
EXECUTE 'CREATE TABLE zzz_' || ptable || ' AS SELECT * FROM ' || ptable;
-- columns
vseq = '';
vcols = '';
vsql = 'CREATE TABLE ' || ptable || '(';
FOR rcol IN SELECT column_name as col, udt_name as coltype, column_default as coldef,
is_nullable as is_null, character_maximum_length as len,
numeric_precision as num_prec, numeric_scale as num_scale
FROM information_schema.columns
WHERE table_name = ptable
ORDER BY ordinal_position
LOOP
vtype = rcol.coltype;
IF (substr(rcol.coldef,1,7) = 'nextval') THEN
vtype = 'serial';
vseq = vseq || 'SELECT setval(''' || ptable || '_' || rcol.col || '_seq'''
|| ', max(' || rcol.col || ')) FROM ' || ptable || ';';
ELSIF (vtype = 'bpchar') THEN
vtype = 'char';
END IF;
vsql = vsql || E'\n' || rcol.col || ' ' || vtype;
IF (vtype in ('varchar', 'char')) THEN
vsql = vsql || '(' || rcol.len || ')';
ELSIF (vtype = 'numeric') THEN
vsql = vsql || '(' || rcol.num_prec || ',' || rcol.num_scale || ')';
END IF;
IF (rcol.is_null = 'NO') THEN
vsql = vsql || ' NOT NULL';
END IF;
IF (rcol.coldef <> '' AND vtype <> 'serial') THEN
vsql = vsql || ' DEFAULT ' || rcol.coldef;
END IF;
vsql = vsql || E',';
vcols = vcols || rcol.col || ',';
--
IF (rcol.col = pafter) THEN
vsql = vsql || E'\n' || pcol || ',';
END IF;
END LOOP;
vcols = substr(vcols,1,length(vcols)-1);
--keys
vkey = '';
FOR rkey IN SELECT constraint_name as name, column_name as col
FROM information_schema.key_column_usage
WHERE table_name = ptable
LOOP
IF (vkey = '') THEN
vkey = E'\nCONSTRAINT ' || rkey.name || ' PRIMARY KEY (';
END IF;
vkey = vkey || rkey.col || ',';
END LOOP;
IF (vkey <> '') THEN
vsql = vsql || substr(vkey,1,length(vkey)-1) || ') ';
END IF;
vsql = substr(vsql,1,length(vsql)-1) || ') WITHOUT OIDS';
--index
vidx = '';
cidx = '';
FOR ridx IN SELECT s.indexrelname as nome, a.attname as col
FROM pg_index i LEFT JOIN pg_class c ON c.oid = i.indrelid
LEFT JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum = ANY(i.indkey)
LEFT JOIN pg_stat_user_indexes s USING (indexrelid)
WHERE c.relname = ptable AND i.indisunique != 't' AND i.indisprimary != 't'
ORDER BY s.indexrelname
LOOP
IF (ridx.nome <> cidx) THEN
IF (vidx <> '') THEN
vidx = substr(vidx,1,length(vidx)-1) || ');';
END IF;
cidx = ridx.nome;
vidx = vidx || E'\nCREATE INDEX ' || cidx || ' ON ' || ptable || ' (';
END IF;
vidx = vidx || ridx.col || ',';
END LOOP;
IF (vidx <> '') THEN
vidx = substr(vidx,1,length(vidx)-1) || ')';
END IF;
--trigger
vtgr = '';
ctgr = '';
etgr = '';
FOR rtgr IN SELECT trigger_name as nome, event_manipulation as eve,
action_statement as act, condition_timing as cond
FROM information_schema.triggers
WHERE event_object_table = ptable
LOOP
IF (rtgr.nome <> ctgr) THEN
IF (vtgr <> '') THEN
vtgr = replace(vtgr, '_#eve_', substr(etgr,1,length(etgr)-3));
END IF;
etgr = '';
ctgr = rtgr.nome;
vtgr = vtgr || 'CREATE TRIGGER ' || ctgr || ' ' || rtgr.cond || ' _#eve_ '
|| 'ON ' || ptable || ' FOR EACH ROW ' || rtgr.act || ';';
END IF;
etgr = etgr || rtgr.eve || ' OR ';
END LOOP;
IF (vtgr <> '') THEN
vtgr = replace(vtgr, '_#eve_', substr(etgr,1,length(etgr)-3));
END IF;
-- drop the old table and create the new one
EXECUTE 'DROP TABLE ' || ptable;
IF (EXISTS (SELECT sequence_name FROM information_schema.sequences
WHERE sequence_name = ptable||'_id_seq'))
THEN
EXECUTE 'DROP SEQUENCE '||ptable||'_id_seq';
END IF;
EXECUTE vsql;
-- copy the data into the new table
EXECUTE 'INSERT INTO ' || ptable || '(' || vcols || ')' ||
E'\nSELECT ' || vcols || ' FROM zzz_' || ptable;
EXECUTE vseq;
EXECUTE vidx;
EXECUTE vtgr;
EXECUTE 'DROP TABLE zzz_' || ptable;
END;
$BODY$ LANGUAGE plpgsql VOLATILE COST 100;
@Jeremy Gustie's solution above almost works, but will do the wrong thing if the ordinals are off (or fail altogether if the reordered ordinals make incompatible types match). Give it a try:
CREATE TABLE test1 (one varchar, two varchar, three varchar);
CREATE TABLE test2 (three varchar, two varchar, one varchar);
INSERT INTO test1 (one, two, three) VALUES ('one', 'two', 'three');
INSERT INTO test2 SELECT * FROM test1;
SELECT * FROM test2;
The results show the problem:
testdb=> select * from test2;
three | two | one
-------+-----+-------
one | two | three
(1 row)
You can remedy this by specifying the column names in the insert:
INSERT INTO test2 (one, two, three) SELECT * FROM test1;
That gives you what you really want:
testdb=> select * from test2;
three | two | one
-------+-----+-----
three | two | one
(1 row)
The problem comes when you have legacy code that doesn't do this, as I indicated above in my comment on peufeu's reply.
Update: It occurred to me that you can do the same thing with the column names in the INSERT clause by specifying the column names in the SELECT clause. You just have to reorder them to match the ordinals in the target table:
INSERT INTO test2 SELECT three, two, one FROM test1;
And you can of course do both to be very explicit:
INSERT INTO test2 (one, two, three) SELECT one, two, three FROM test1;
That gives you the same results as above, with the column values properly matched.
The order of the columns is totally irrelevant in relational databases
Yes.
For instance if you use Python, you would do:
cursor.execute("SELECT id, name FROM users")
for id, name in cursor:
    print id, name
Or you would do:
cursor.execute("SELECT * FROM users")
for row in cursor:
    print row['id'], row['name']
But no sane person would ever use positional results like this:
cursor.execute("SELECT * FROM users")
for id, name in cursor:
    print id, name
Well, it's a visual goody for DBAs and could be implemented in the engine with minor performance loss. Add a column-order table to pg_catalog or wherever it's suited best, keep it in memory, and use it before certain queries. Why overthink such a small piece of eye candy?
@Milen A. Radev
The need for a fixed order of columns is not always defined by the query that pulls them. The values returned by pg_fetch_row do not include the associated column names, and therefore the column order would have to be defined by the SQL statement.
A simple select * from would require innate knowledge of the table structure, and would sometimes cause issues if the order of the columns were to change.
Using pg_fetch_assoc is a more reliable method, as you can reference the column names and therefore use a simple select * from.

I need to change one table's data types to match another's in Netezza, by just passing the table names?

I have one table named L0 which is created as:
create table L0 (
name varchar,
number varchar,
address varchar
);
The data type of all the columns present in L0 is varchar.
I have another table L1 which is created as:
create table L1 (
name varchar,
number int,
address char
);
I want to convert the data types of L0 table to be same as L1 table, by just passing the table name.
I want the final query look like this:
select cast(name as varchar), cast(number as int), cast(address as char) from L0
minus
select * from L1;
What is the way to do it?
If you have the group_concat UDA installed in Netezza, then you can generate the desired output using the columns metadata view:
create table L0 (
name varchar(20),
number varchar(10),
address varchar(50)
);
create table L1 (
name varchar(50),
number int,
address char(20)
);
Query:
SELECT 'select ' || system..group_concat('cast (' || a.column_name || ' as ' || b.data_type || ')')
       || ' from ' || a.table_name
       || ' minus select ' || system..group_concat(b.column_name)
       || ' from ' || b.table_name || ' ;'
FROM columns a, columns b
WHERE a.column_name = b.column_name
  AND a.table_name = 'L0'
  AND b.table_name = 'L1'
  AND a.table_catalog = 'TEST' -- this is the current database name
GROUP BY a.table_name, b.table_name;
Output:
SELECT cast(ADDRESS AS CHARACTER(20))
,cast(NAME AS CHARACTER VARYING(50))
,cast(NUMBER AS INTEGER)
FROM L0 minus
SELECT ADDRESS
,NAME
,NUMBER
FROM L1;
EDIT:
Alternate approach using stored procedure
CREATE OR REPLACE PROCEDURE GRP_CONCAT(varchar(50), varchar(50))
RETURNS VARCHAR(ANY)
EXECUTE AS OWNER
LANGUAGE NZPLSQL AS
BEGIN_PROC
DECLARE
  TABLE_NAME1 ALIAS FOR $1;
  TABLE_NAME2 ALIAS FOR $2;
  return_text varchar(10000) := 'SELECT';
  x record;
BEGIN
  FOR x IN
    select a.column_name as colname, b.data_type as datatype
    from columns a, columns b
    where a.column_name = b.column_name
      and a.table_name = TABLE_NAME1
      and b.table_name = TABLE_NAME2
    order by b.ordinal_position
  LOOP
    return_text := return_text || ' CAST (' || x.colname || ' as ' || x.datatype || ') , ';
  END LOOP;
  return_text := trim(return_text, ' , ') || ' FROM ' || TABLE_NAME1
    || ' minus select * from ' || TABLE_NAME2 || ';';
  return return_text;
EXCEPTION WHEN OTHERS THEN
  RAISE NOTICE 'ERROR: %', SQLERRM;
  RETURN 1;
END;
END_PROC;
Output:
CALL GRP_CONCAT('L0','L1');
SELECT CAST(NAME AS CHARACTER VARYING(50))
,CAST(NUMBER AS INTEGER)
,CAST(ADDRESS AS CHARACTER(20))
FROM L0 minus
SELECT *
FROM L1;

A more efficient way to do a select insert that involves over 300 columns?

I am trying to find a more efficient way to write a PL/SQL query to select-insert from a table with 300+ columns into the backup version of that table (same column names + 2 extra columns).
I could simply type out all the column names in the script (example below), but with that many names, it will bother me... :(
INSERT INTO
TABLE_TEMP
(column1, column2, column3, etc)
(SELECT column1, column2, column3, etc FROM TABLE WHERE id = USER_ID);
Thanks in advance
Specify literals/null for those two extra columns. Note that this relies on the two extra columns coming last in TABLE_TEMP's column order and on the remaining columns matching positionally:
INSERT INTO
TABLE_TEMP
SELECT t1.*, null, null FROM TABLE t1 WHERE id = USER_ID
You can pretty easily build a column list for any given table:
select table_catalog
,table_schema
,table_name
,string_agg(column_name, ', ' order by ordinal_position)
from information_schema.columns
where table_catalog = 'catalog_name'
and table_schema = 'schema_name'
and table_name = 'table_name'
group by table_catalog
,table_schema
,table_name
That should get you nearly where you need to be.
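For example, you could feed that aggregated list straight into a generated INSERT ... SELECT; a minimal sketch with placeholder names, using NULLs for the two extra columns (my own completion, not part of the original answer):
-- builds the full INSERT ... SELECT text; run the generated statement yourself
select 'INSERT INTO table_name_backup ('
       || string_agg(column_name, ', ' order by ordinal_position)
       || ', extra_col1, extra_col2) SELECT '
       || string_agg(column_name, ', ' order by ordinal_position)
       || ', NULL, NULL FROM table_name WHERE id = :user_id;'
from information_schema.columns
where table_catalog = 'catalog_name'
  and table_schema = 'schema_name'
  and table_name = 'table_name';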
The question tag says plsql, which is Oracle or one of its variants. Here is an example of doing it in Oracle:
drop table brianl.deleteme1;
drop table brianl.deleteme2;
CREATE TABLE brianl.deleteme1
(
a INTEGER
, b INTEGER
, c INTEGER
, efg INTEGER
);
CREATE TABLE brianl.deleteme2
(
b INTEGER
, c INTEGER
, d INTEGER
, efg INTEGER
);
DECLARE
l_ownerfrom VARCHAR2 (30) := 'BRIANL';
l_tablefrom VARCHAR2 (30) := 'DELETEME1';
l_ownerto VARCHAR2 (30) := 'BRIANL';
l_tableto VARCHAR2 (30) := 'DELETEME2';
l_comma VARCHAR2 (1) := NULL;
BEGIN
DBMS_OUTPUT.put_line ('insert into ' || l_ownerto || '.' || l_tableto || '(');
FOR eachrec IN ( SELECT f.column_name
FROM all_tab_cols f INNER JOIN all_tab_cols t ON (f.column_name = t.column_name)
WHERE f.owner = l_ownerfrom
AND f.table_name = l_tablefrom
AND t.owner = l_ownerto
AND t.table_name = l_tableto
ORDER BY f.column_name)
LOOP
DBMS_OUTPUT.put_line (l_comma || eachrec.column_name);
l_comma := ',';
END LOOP;
DBMS_OUTPUT.put_line (') select ');
l_comma := NULL;
FOR eachrec IN ( SELECT f.column_name
FROM all_tab_cols f INNER JOIN all_tab_cols t ON (f.column_name = t.column_name)
WHERE f.owner = l_ownerfrom
AND f.table_name = l_tablefrom
AND t.owner = l_ownerto
AND t.table_name = l_tableto
ORDER BY f.column_name)
LOOP
DBMS_OUTPUT.put_line (l_comma || eachrec.column_name);
l_comma := ',';
END LOOP;
DBMS_OUTPUT.put_line (' from ' || l_ownerfrom || '.' || l_tablefrom || ';');
END;
This results in this output:
insert into BRIANL.DELETEME2(
B
,C
,EFG
) select
B
,C
,EFG
from BRIANL.DELETEME1;
Nicely formatted:
INSERT INTO brianl.deleteme2 (b, c, efg)
SELECT b, c, efg
FROM brianl.deleteme1;