How to read the same column from every table in a database? - sql

I have a huge database with 400+ tables. Each table has the same id column as its primary key and a timestamp_modify column that records when the row was last changed.
What I want is a list of the latest changes by ID and table name, like:
Table | id | timestamp_modify
Kid | 1 | 24.10.2021 00:01
Parent | 1000 | 24.10.2021 00:02
The only (very bad) way I could come up with is to build a view in which I include every damn table by hand and read out the values...
Is there a better way?

How about a pipelined function?
First, just setting the session's date format (you don't have to do that):
SQL> alter session set nls_date_format = 'dd.mm.yyyy hh24:mi:ss';
Session altered.
Types:
SQL> create or replace type t_row as object
2 (table_name varchar2(30),
3 id number,
4 timestamp_modify date)
5 /
Type created.
SQL> create or replace type t_tab is table of t_row;
2 /
Type created.
The function: its cursor FOR loop queries user_tab_columns for tables that contain both the ID and TIMESTAMP_MODIFY columns, then dynamically builds a SELECT statement that returns the column values for the latest TIMESTAMP_MODIFY value (fetched by the subquery), using MAX to avoid TOO_MANY_ROWS.
SQL> create or replace function f_test
2 return t_tab pipelined
3 as
4 l_str varchar2(500);
5 l_id number;
6 l_timestamp_modify date;
7 begin
8 for cur_r in (select table_name from user_tab_columns
9 where column_name = 'ID'
10 intersect
11 select table_name from user_tab_columns
12 where column_name = 'TIMESTAMP_MODIFY'
13 )
14 loop
15 l_str := 'select max(a.id) id, max(a.timestamp_modify) timestamp_modify ' ||
16 'from ' || cur_r.table_name || ' a ' ||
17 'where a.timestamp_modify = ' ||
18 ' (select max(b.timestamp_modify) ' ||
19 ' from ' || cur_r.table_name || ' b ' ||
20 ' where b.id = a.id)';
21 execute immediate l_str into l_id, l_timestamp_modify;
22 pipe row(t_row(cur_r.table_name, l_id, l_timestamp_modify));
23 end loop;
24 end;
25 /
Function created.
Testing:
SQL> select * from table(f_test);
TABLE_NAME ID TIMESTAMP_MODIFY
------------------------------ ---------- -------------------
TABA 1 24.10.2021 14:59:29
TAB_1 1 24.10.2021 15:03:16
TAB_2 25 24.10.2021 15:03:36
TEST 5 24.10.2021 15:04:24
SQL>

Yes, the only way is to UNION ALL all of the tables, like:
select id, timestamp_modify
from kid
union all
select id, timestamp_modify
from parent
union all
...
The performance will be awful, since all the tables will be scanned every time :(
I think you might reconsider your DB design...
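If you do go that route, you don't have to type the view by hand; you can let the data dictionary generate the UNION ALL text for you. A minimal sketch (assuming every table of interest really has both ID and TIMESTAMP_MODIFY columns, and that the generated text fits into a VARCHAR2):
select listagg(
         'select ''' || table_name || ''' as table_name, id, timestamp_modify from ' || table_name,
         chr(10) || 'union all' || chr(10)
       ) within group (order by table_name) as view_body
from  (select table_name from user_tab_columns where column_name = 'ID'
       intersect
       select table_name from user_tab_columns where column_name = 'TIMESTAMP_MODIFY');
Paste the output into CREATE OR REPLACE VIEW ... AS and you have the union view without typing 400 table names. Note that LISTAGG is limited to the VARCHAR2 maximum, so with 400+ tables you might have to spool the individual SELECTs instead of aggregating them into one string.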

You can build a procedure for this, but even so it will have some impact on performance. Although there is a loop, with dynamic SQL you only need about 400 iterations, and in each one you insert all the IDs of that table.
I am making some assumptions:
You want all the IDs and their corresponding timestamp_modify per table
I create a table to store the results. If you always use the same name, the object is recycled; if not, you can keep a history.
I am assuming that only one timestamp_modify row is present per ID
I filter only the tables of your schema that contain both columns.
The result table also contains table_name, so you can identify which table each record came from.
One example:
create or replace procedure pr_build_output ( p_tmp_table in varchar2 default 'TMP_RESULT' )
is
vcounter pls_integer;
vsql clob;
vtimestamp date; -- or timestamp
begin
-- create table to store results
select count(*) into vcounter from all_tables where table_name = upper(p_tmp_table) and owner = 'MY_SCHEMA';
if vcounter = 1
then
execute immediate ' drop table '||p_tmp_table||' purge ' ;
end if;
vsql := ' create table '||p_tmp_table||'
( table_name varchar2(128) ,
id number,
timestamp_modify date -- or timestamp
) ';
execute immediate vsql ;
-- Populate rows
for h in
( select a.table_name from all_tables a
where a.owner = 'MY_SCHEMA'
and a.table_name in ( select b.table_name from all_tab_columns b where b.owner = 'MY_SCHEMA'
and b.column_name in ('ID', 'TIMESTAMP_MODIFY')
group by b.table_name
having count(*) = 2 -- keep only tables that have both columns
)
)
loop
vsql := ' insert into '||p_tmp_table||' ( table_name , id, timestamp_modify )
select '''||h.table_name||''' as table_name , id , timestamp_modify
from my_schema.'||h.table_name||'
' ;
execute immediate vsql ;
commit ;
end loop;
exception when others then raise;
end;
/
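For example, calling the procedure and reading the results (assuming the default table name) might look like this:
exec pr_build_output;

select table_name, id, timestamp_modify
from tmp_result
order by timestamp_modify desc;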

Related

INSERT INTO TAB SELECT * FROM TABLE TAB#db2 - Too many values

I'm trying to move rows between two tables which have many columns.
The table columns are identical except that the destination table (tab#db2) has a few more columns, which causes a simple INSERT to fail.
I'd like to use a simple PL/SQL statement to build a list of the columns in tab#db2 dynamically, instead of typing out the names of col1, col2, etc. in the INSERT and SELECT clauses. Example:
declare a variable as var_col_list
set col_list = output of select * from tab (omitting rows)
INSERT INTO TAB *var_col_list* SELECT *var_cols_list* FROM TABLE TAB#db2
I've researched using %rowtype but cannot find a suitable example that would take less time than simply writing out the names of the columns!
Any advice is greatly appreciated
If you use e.g. TOAD, you can right-click the table and let it Generate statement - in your case, that would be INSERT. You'd slightly modify it (remove columns you don't need) and that's all.
Otherwise, this is how you might do it semi-manually.
This is my source table:
SQL> SELECT * FROM dept;
DEPTNO DNAME LOC
---------- -------------------- --------------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
Target table doesn't contain all columns:
SQL> CREATE TABLE target
2 (
3 deptno NUMBER,
4 dname VARCHAR2 (20)
5 );
Table created.
Code which loops through all TARGET table columns (i.e. the table which has fewer columns) and composes the INSERT INTO statement:
SQL> DECLARE
2 l_str VARCHAR2 (1000);
3 BEGIN
4 FOR cur_r IN (SELECT column_name
5 FROM user_tab_columns
6 WHERE table_name = 'TARGET')
7 LOOP
8 l_str := l_str || ', ' || cur_r.column_name;
9 END LOOP;
10
11 l_str :=
12 'insert into target select ' || LTRIM (l_str, ', ') || ' from dept';
13 DBMS_OUTPUT.put_line (l_str);
14
15 EXECUTE IMMEDIATE l_str;
16 END;
17 /
insert into target select DEPTNO, DNAME from dept --> this is the L_STR contents
PL/SQL procedure successfully completed.
SQL> SELECT * FROM target;
DEPTNO DNAME
---------- --------------------
10 ACCOUNTING
20 RESEARCH
30 SALES
40 OPERATIONS
Seems to be OK.
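If you only need the comma-separated column list itself, LISTAGG over USER_TAB_COLUMNS gives it in a single query (a sketch against the same TARGET table):
SELECT LISTAGG (column_name, ', ') WITHIN GROUP (ORDER BY column_id) AS col_list
  FROM user_tab_columns
 WHERE table_name = 'TARGET';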
Using the solution provided by Littlefoot, I made some minor tweaks to fit my requirement perfectly:
SQL> create table taba (col1 number,col2 number);
SQL> insert into taba values (1,2);
SQL> select * from taba;
COL1 COL2
---------- ----------
1 2
SQL> create table tabb (col1 number,col2 number, col3 number);
SQL> DECLARE
l_str VARCHAR2 (32767);
BEGIN
FOR cur_r IN (SELECT column_name
FROM user_tab_columns
WHERE table_name = 'TABA'
order by column_id asc)
LOOP
l_str := l_str || ', ' || cur_r.column_name;
END LOOP;
l_str :=
'insert into tabb (' || LTRIM (l_str, ', ') || ') ' ||' select ' || LTRIM (l_str, ', ') || ' from taba';
DBMS_OUTPUT.put_line (l_str);
EXECUTE IMMEDIATE l_str;
END;
/
Output of l_str (SQL INSERT):
insert into tabb (COL1, COL2) select COL1, COL2 from taba
Result:
SQL> select * from tabb;
COL1 COL2 COL3
---------- ---------- ----------
1 2

Oracle- creating dynamic function for deleting tables based on cursor

I'm trying to build a dynamic function in Oracle using a cursor for all the tables that need to be dropped and re-created again. For example, I have the following example table structure:
CREATE TABLE All_tmp_DATA AS
(SELECT 'T_tmp_test1' As Table_NM, 'TEST1' As Process_name FROM DUAL UNION ALL
SELECT 'T_tmp_test2' As Table_NM, 'TEST1' As Process_name FROM DUAL UNION ALL
SELECT 'T_tmp_test3' As Table_NM, 'TEST1' As Process_name FROM DUAL)
The above tables starting with "T_tmp" represent all the tables in the database which need to be dropped, if they exist, when starting the TEST1 process. I really need a function that takes a Process_name parameter, where I can pass in "TEST1", and builds a loop using a cursor, binding each Table_NM from All_tmp_DATA into table_name in the following code:
BEGIN
SELECT count(*)
INTO l_cnt
FROM user_tables
WHERE table_name = 'MY_TABLE';
IF l_cnt = 1 THEN
EXECUTE IMMEDIATE 'DROP TABLE my_table';
END IF;
END;
First of all, I'd suggest you not use mixed case when naming Oracle objects.
Test case:
SQL> select * From all_tmp_data;
TABLE_NM PROCE
----------- -----
T_tmp_test1 TEST1
T_tmp_test2 TEST1
T_tmp_test3 TEST1
SQL> create table "T_tmp_test1" as select * From dept;
Table created.
SQL> -- I don't have "T_tmp_test2"
SQL> create table "T_tmp_test3" as select * From emp;
Table created.
SQL>
SQL> select table_name From user_Tables where upper(table_name) like 'T_TMP%';
TABLE_NAME
------------------------------
T_tmp_test3
T_tmp_test1
An anonymous PL/SQL block which drops the tables listed in ALL_TMP_DATA:
as opposed to your code, I concatenated the table name into the DROP statement
as you use table names in mixed case, you always have to enclose them in double quotes (did I say not to use that?)
As the final select shows, those tables don't exist any more.
SQL> declare
2 l_cnt number;
3 begin
4 for cur_r in (select table_nm from all_tmp_data) loop
5 select count(*) into l_cnt
6 from user_tables
7 where table_name = cur_r.table_nm;
8
9 if l_cnt > 0 then
10 execute immediate ('drop table "' || cur_r.table_nm || '"');
11 end if;
12 end loop;
13 end;
14 /
PL/SQL procedure successfully completed.
SQL> select table_name From user_Tables where upper(table_name) like 'T_TMP%';
no rows selected
SQL>
As for the PROCESS_NAME column: I have no idea what it is used for, so I did exactly that - I didn't use it.
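If you do want to filter on it, the same loop can be wrapped into a procedure that takes the process name as a parameter (a sketch; PR_DROP_TMP_TABLES is just an illustrative name):
create or replace procedure pr_drop_tmp_tables (p_process_name in varchar2)
is
  l_cnt number;
begin
  for cur_r in (select table_nm
                from all_tmp_data
                where process_name = p_process_name)
  loop
    select count(*) into l_cnt
    from user_tables
    where table_name = cur_r.table_nm;

    if l_cnt > 0 then
      execute immediate 'drop table "' || cur_r.table_nm || '"';
    end if;
  end loop;
end;
/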
You can use exception handling to handle such a scenario directly, as follows:
DECLARE
TABLE_DOES_NOT_EXIST EXCEPTION;
PRAGMA EXCEPTION_INIT ( TABLE_DOES_NOT_EXIST, -00942 );
BEGIN
FOR CUR_R IN (
SELECT TABLE_NM
FROM ALL_TMP_DATA
) LOOP
BEGIN
EXECUTE IMMEDIATE 'drop table "' || cur_r.table_nm || '"';
DBMS_OUTPUT.PUT_LINE('"' || cur_r.table_nm || '" table dropped.');
EXCEPTION
WHEN TABLE_DOES_NOT_EXIST THEN
DBMS_OUTPUT.PUT_LINE('"' || cur_r.table_nm || '" table does not exists');
END;
END LOOP;
END;
/

How to check if all the tables in database are modified after an Update activity is performed on columns of tables?

I have to update all the tables having a column name like '%DIV%' to the value DD wherever it is MG. I have written the script for it, but I can't figure out how to verify that the columns of all the tables were updated to the value DD after the activity is performed. I have written this query:
SELECT 'SELECT '||OWNER||'.'||TABLE_NAME||', '||COLUMN_NAME||' FROM '||OWNER||'.'||TABLE_NAME||' WHERE '||COLUMN_NAME||' = ''MG'' ;'
FROM RADHA.CHANGE_TABLE
WHERE VALID_FLAG='Y'
I was planning to make a table structure like
OWNER TABLE_NAME PREV_COUNT
The PREV_COUNT will hold the count of rows having the column value MG, and after the activity is performed I will verify with the following query that the corresponding rows have been updated to DD.
SELECT 'SELECT '||OWNER||'.'||TABLE_NAME||', '||COLUMN_NAME||' FROM '||OWNER||'.'||TABLE_NAME||' WHERE '||COLUMN_NAME||' = ''DD'' ;' FROM RADHA.CHANGE_TABLE WHERE VALID_FLAG='Y'
And the output of this query would go into a table
OWNER TABLE_NAME NEW_COUNT
But I can't work out how to fetch records from the SELECT query, since it is only a string built inside the outer SELECT. I want the result set so that I can insert the records into the table mentioned above. Please guide me on how to approach this.
I don't have your tables, but - based on Scott's sample schema - here's a script which searches through all of its tables for a column named JOB (line #8) and checks how many rows in each have a value that looks like (hint: LIKE) CLERK in it (line #12).
See how it works and adjust it so that it works for you.
SQL> DECLARE
2 l_str VARCHAR2(500);
3 l_cnt NUMBER := 0;
4 BEGIN
5 FOR cur_r IN (SELECT u.table_name, u.column_name
6 FROM user_tab_columns u, user_tables t
7 WHERE u.table_name = t.table_name
8 AND u.column_name = 'JOB'
9 )
10 LOOP
11 l_str := 'SELECT COUNT(*) FROM ' || cur_r.table_name ||
12 ' WHERE ' || cur_r.column_name || ' like (''%CLERK%'')';
13
14 EXECUTE IMMEDIATE (l_str) INTO l_cnt;
15
16 IF l_cnt > 0 THEN
17 dbms_output.put_line(l_cnt ||' : ' || cur_r.table_name);
18 END IF;
19 END LOOP;
20 END;
21 /
4 : EMP --> there are 4 CLERKs in the EMP table
PL/SQL procedure successfully completed.
SQL>
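Applied to your CHANGE_TABLE, the same idea can populate the counting table you described. In this sketch, VERIFY_COUNTS (owner, table_name, new_count) is a hypothetical results table you would create up front:
declare
  l_cnt number;
begin
  for cur_r in (select owner, table_name, column_name
                from radha.change_table
                where valid_flag = 'Y')
  loop
    -- count the rows already updated to DD in each listed column
    execute immediate
      'select count(*) from ' || cur_r.owner || '.' || cur_r.table_name ||
      ' where ' || cur_r.column_name || ' = ''DD'''
      into l_cnt;

    insert into verify_counts (owner, table_name, new_count)
    values (cur_r.owner, cur_r.table_name, l_cnt);
  end loop;

  commit;
end;
/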

How to select the column name that contains maximum number of distinct values? - Oracle SQL

Here is my current query:
SELECT c.COLUMN_NAME, t.NUM_ROWS
FROM ALL_TAB_COLUMNS c
INNER JOIN ALL_TABLES t ON t.OWNER = c.OWNER AND t.TABLE_NAME = c.TABLE_NAME
WHERE c.TABLE_NAME='MY_TABLE_NAME'
AND c.OWNER = 'MY_SCHEMA_NAME'
What this does is retrieve the name of each column in my table along with the number of rows in the table.
What I need to do is retrieve the number of distinct values present in each column and then ultimately determine which column has the maximum number of distinct entries. How would I go about doing that given my current query?
Is there a better way to achieve what I want to do? Is dynamic SQL necessary?
You can use XMLQUERY for fetching the desired result.
Oracle data setup:
SQL> CREATE TABLE TEST_SO (COL1 NUMBER, COL2 VARCHAR(20));
Table created.
SQL>
SQL> INSERT INTO TEST_SO (COL1,COL2) VALUES (1, 'TEJASH');
1 row created.
SQL> INSERT INTO TEST_SO (COL1,COL2) VALUES (2, 'TEJASH1');
1 row created.
SQL> INSERT INTO TEST_SO (COL1,COL2) VALUES (3, 'TEJASH2');
1 row created.
SQL> INSERT INTO TEST_SO (COL1,COL2) VALUES (2, 'TEJASH3');
1 row created.
SQL> INSERT INTO TEST_SO (COL1,COL2) VALUES (2, 'TEJASH');
1 row created.
SQL>
Now, COL2 has 4 distinct values and COL1 has 3 distinct values.
Use the following query to fetch COL2 and its distinct count of 4 (as it is greater than the 3 distinct values in COL1).
Your Query:
SQL> SELECT
2 C.COLUMN_NAME,
3 TO_NUMBER(XMLQUERY('/ROWSET/ROW/C/text()'
4 PASSING XMLTYPE(DBMS_XMLGEN.GETXML(
5 'select count(distinct "'
6 || C.COLUMN_NAME
7 || '") as c '
8 || 'from "'
9 || C.TABLE_NAME
10 || '"')) RETURNING CONTENT)) AS DISTINCT_VALS
11 FROM USER_TAB_COLUMNS C
12 WHERE C.TABLE_NAME = 'TEST_SO'
13 ORDER BY DISTINCT_VALS DESC NULLS LAST
14 FETCH FIRST ROW WITH TIES;
COLUMN_NAME DISTINCT_VALS
--------------- -------------
COL2 4
SQL>
Cheers!!
Since you are ready to use NUM_ROWS from the ALL_% views, and if you have statistics gathered and some possible discrepancy is acceptable, you might use the statistics data the database has already gathered in ALL_TAB_COL_STATISTICS. Like this:
select num_distinct, column_name
from all_tab_col_statistics
where table_name = 'TABLE_NAME_UPPERCASE'
order by num_distinct desc
fetch first row with ties;
Again, please use this only when some tolerance is acceptable. Though table statistics are usually gathered on a regular basis (it depends on the DBA), there can be a gap between the gathered and the real values.
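If you need the numbers to be reasonably current, you can refresh the statistics for that table first, for example:
exec dbms_stats.gather_table_stats(user, 'TABLE_NAME_UPPERCASE');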
For a single table you could also use a PL/SQL block like this:
DECLARE
CURSOR Cols IS
SELECT COLUMN_NAME
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = 'MY_TABLE_NAME'
ORDER BY COLUMN_ID;
cur INTEGER := DBMS_SQL.OPEN_CURSOR;
columnCount INTEGER := 0;
describeColumns DBMS_SQL.DESC_TAB2;
res INTEGER;
distinctValues NUMBER;
m NUMBER := -1;
c INTEGER := 0;
sqlstr VARCHAR2(30000);
BEGIN
FOR aCol IN Cols LOOP
sqlstr := sqlstr || ',COUNT(DISTINCT '||aCol.COLUMN_NAME||') AS '||aCol.COLUMN_NAME;
END LOOP;
sqlstr := REGEXP_REPLACE(sqlstr, '^,', 'SELECT ')||' FROM MY_TABLE_NAME';
DBMS_SQL.PARSE(cur, sqlStr, DBMS_SQL.NATIVE);
DBMS_SQL.DESCRIBE_COLUMNS2(cur, columnCount, describeColumns);
FOR i IN 1..columnCount LOOP
DBMS_SQL.DEFINE_COLUMN(cur, i, distinctValues);
END LOOP;
res := DBMS_SQL.EXECUTE(cur);
res := DBMS_SQL.FETCH_ROWS(cur); -- no loop required as you get always exactly one row
FOR i IN 1..columnCount LOOP
DBMS_SQL.COLUMN_VALUE(cur, i, distinctValues);
IF distinctValues > m THEN
m := distinctValues;
c := i;
END IF;
DBMS_OUTPUT.PUT_LINE ( describeColumns(i).col_name ||': '|| distinctValues );
END LOOP;
DBMS_OUTPUT.PUT_LINE ( 'Max distinct values at '||describeColumns(c).col_name ||': '|| m );
DBMS_SQL.CLOSE_CURSOR(cur); -- release the DBMS_SQL cursor
END;
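Since the result goes to DBMS_OUTPUT, remember to enable server output before running the block, e.g. in SQL*Plus or SQL Developer:
SET SERVEROUTPUT ON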

How to select columns from a table which have non null values?

I have a table containing hundreds of columns, many of which are null, and I would like to write my SELECT statement so that only those columns containing a value are returned. It would help me analyze the data better. Something like:
Select (non null columns) from tablename;
I want to select all columns which have at least one non-null value.
Can this be done?
Have a look at the statistics information; it may be useful for you:
SQL> exec dbms_stats.gather_table_stats('SCOTT','EMP');
PL/SQL procedure successfully completed.
SQL> select num_rows from all_tables where owner='SCOTT' and table_name='EMP';
NUM_ROWS
----------
14
SQL> select column_name,nullable,num_distinct,num_nulls from all_tab_columns
2 where owner='SCOTT' and table_name='EMP' order by column_id;
COLUMN_NAME N NUM_DISTINCT NUM_NULLS
------------------------------ - ------------ ----------
EMPNO N 14 0
ENAME Y 14 0
JOB Y 5 0
MGR Y 6 1
HIREDATE Y 13 0
SAL Y 12 0
COMM Y 4 10
DEPTNO Y 3 0
8 rows selected.
For example you can check if NUM_NULLS = NUM_ROWS to identify "empty" columns.
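Putting that check into a query (a sketch reusing the SCOTT.EMP example; it lists the columns that, according to the gathered statistics, contain at least one non-null value):
select c.column_name
from all_tab_columns c
join all_tables t on t.owner = c.owner and t.table_name = c.table_name
where c.owner = 'SCOTT'
and c.table_name = 'EMP'
and c.num_nulls < t.num_rows;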
Reference: ALL_TAB_COLUMNS, ALL_TABLES.
Use the below (note: INFORMATION_SCHEMA is not available in Oracle; this also checks the NOT NULL constraint rather than the actual data):
SELECT *
FROM information_schema.columns
WHERE table_name = 'Table_Name' and is_nullable = 'NO'
Table_Name has to be replaced accordingly...
select column_name
from user_tab_columns
where table_name='Table_name' and num_nulls=0;
Here is simple code to get the non-null columns (note that NUM_NULLS is populated by gathered statistics).
I don't think this can be done in a single query. You may need some PL/SQL to first test which columns contain data and put together a statement based on that information. Of course, if the data in your table changes, you have to recreate the statement.
declare
l_table varchar2(30) := 'YOUR_TABLE';
l_statement varchar2(32767);
l_test_statement varchar2(32767);
l_contains_value pls_integer;
-- select column_names from your table
cursor c is
select column_name
,nullable
from user_tab_columns
where table_name = l_table;
begin
l_statement := 'select ';
for r in c
loop
-- If column is not nullable it will always contain a value
if r.nullable = 'N'
then
-- add column to select list.
l_statement := l_statement || r.column_name || ',';
else
-- check if there is a row that has a value for this column
begin
l_test_statement := 'select 1 from dual where exists (select 1 from ' || l_table || ' where ' ||
r.column_name || ' is not null)';
dbms_output.put_line(l_test_statement);
execute immediate l_test_statement
into l_contains_value;
-- Yes, add column to select list
l_statement := l_statement || r.column_name || ',';
exception
when no_data_found then
null;
end;
end if;
end loop;
-- create a select statement
l_statement := substr(l_statement, 1, length(l_statement) - 1) || ' from ' || l_table;
-- print the generated statement so it can be copied and run
dbms_output.put_line(l_statement);
end;
This relies on column statistics: LOW_VALUE is only populated for columns that had at least one non-null value when statistics were last gathered.
select rtrim (xmlagg (xmlelement (e, column_name || ',')).extract ('//text()'), ',') col
from (select column_name
      from user_tab_columns
      where table_name = '<table_name>' and low_value is not null)
This block determines all the columns in the table, loops through them with dynamic SQL to check whether they contain only nulls, then uses DBMS_OUTPUT to print a query that selects only the non-null columns.
All you have to do is run the returned query.
I've included the exclusion of PKs and BLOB columns.
Obviously, this is quite slow, as it goes through the columns one by one, and it's not going to be great for very hot tables, as the data may change too quickly, but it works for me since I control the traffic in my dev environment.
DECLARE
l_table_name VARCHAR2(255) := 'XXXX';
l_counter NUMBER;
l_sql CLOB;
BEGIN
FOR r_col IN (SELECT *
FROM user_tab_columns tab_col
WHERE table_name = l_table_name
AND data_type NOT IN ('BLOB')
AND column_name NOT IN (SELECT column_name
FROM user_cons_columns con_col
JOIN user_constraints cons ON con_col.constraint_name = cons.constraint_name AND con_col.table_name = cons.table_name
WHERE con_col.table_name = tab_col.table_name
AND constraint_type = 'P')
ORDER BY column_id)
LOOP
EXECUTE IMMEDIATE 'SELECT COUNT(1) FROM '||l_table_name||' WHERE '||r_col.column_name||' IS NOT NULL'
INTO l_counter;
IF l_counter > 0 THEN
IF l_sql IS NULL THEN
l_sql := r_col.column_name;
ELSE
l_sql := l_sql||','||r_col.column_name;
END IF;
END IF;
END LOOP;
l_sql := 'SELECT '||l_sql||CHR(10)
||'FROM '||l_table_name;
----------
DBMS_OUTPUT.put_line(l_sql);
END;
What you're asking to do is establish a dependency on each row in the whole result, which is in fact never what you want. Just think of the ramifications if in one row every column had a value of '0': suddenly the schema of your result set grows to include all of those previously "empty" columns. You're effectively compounding the badness of '*': now your result set depends not just on the table's metadata, but on the data itself.
What you want to do is just select the fields that have what you want, and not deviate from this simple plan.