Troubleshooting PG Function - sql

I have this function:
CREATE OR REPLACE FUNCTION CREATE_AIRSPACE_AVAILABILITY_RECORD
(cur_user VARCHAR, start_time VARCHAR, start_date VARCHAR, end_time VARCHAR, end_date VARCHAR, airspace_name VARCHAR)
RETURNS VOID AS '
DECLARE
c_user ALIAS for $1;
BEGIN
IF start_time IS NULL OR
start_date IS NULL OR
end_time IS NULL OR
end_date IS NULL THEN
INSERT INTO c_user.AIRSPACE_AVAILABILITY
(ASP_AIRSPACE_NM, ASA_TIME_ID, ASA_START_DT, ASA_END_DT)
SELECT airspace_name,
1,
ABP.ABP_START_DT,
ABP.ABP_STOP_DT
FROM ABP
WHERE EXISTS
(SELECT ASP.ASP_AIRSPACE_NM
FROM AIRSPACE ASP
WHERE ASP.ASP_AIRSPACE_NM = airspace_name);
ELSIF start_time IS NOT NULL AND
start_date IS NOT NULL AND
end_time IS NOT NULL AND
end_date IS NOT NULL THEN
INSERT INTO c_user.AIRSPACE_AVAILABILITY
(ASP_AIRSPACE_NM, ASA_TIME_ID, ASA_START_DT, ASA_END_DT)
SELECT airspace_name,
1,
TO_DATE(start_date||start_time,''YYMMDDHH24MI''),
TO_DATE(end_date||end_time,''YYMMDDHH24MI'')
FROM DUAL
WHERE EXISTS
(SELECT ASP.ASP_AIRSPACE_NM
FROM c_user.AIRSPACE ASP
WHERE ASP.ASP_AIRSPACE_NM = airspace_name);
END IF;
END ;
' LANGUAGE plpgsql;
I try calling it like so:
select * from CREATE_AIRSPACE_AVAILABILITY_RECORD('user1','','','','','');
and I get this error:
ERROR: schema "c_user" does not exist
SQL state: 3F000
Context: SQL statement "INSERT INTO c_user.AIRSPACE_AVAILABILITY (ASP_AIRSPACE_NM, ASA_TIME_ID, ASA_START_DT, ASA_END_DT) SELECT $1 , 1, TO_DATE( $2 || $3 ,'YYMMDDHH24MI'), TO_DATE( $4 || $5 ,'YYMMDDHH24MI') FROM DUAL WHERE EXISTS (SELECT ASP.ASP_AIRSPACE_NM FROM c_user.AIRSPACE ASP WHERE ASP.ASP_AIRSPACE_NM = $1 )"
PL/pgSQL function "create_airspace_availability_record" line 23 at SQL statement
Why isn't c_user being replaced with my param (user1)?

When you write something like this:
INSERT INTO c_user.AIRSPACE_AVAILABILITY
the database treats c_user as a schema (namespace) name, not as a variable. Identifiers are resolved at parse time, so if you want to use the value of the c_user variable in such a query, you have to build the statement as a string and run it with EXECUTE:
execute 'INSERT INTO ' || c_user || '.AIRSPACE_AVAILABILITY';
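As a rough sketch (assuming the same table and column names as in the question, and a dollar-quoted function body so the nested quoting stays readable), the second INSERT (the non-NULL branch) could look something like this, with format() substituting the schema name and USING passing the values as parameters:
EXECUTE format(
    'INSERT INTO %I.AIRSPACE_AVAILABILITY
         (ASP_AIRSPACE_NM, ASA_TIME_ID, ASA_START_DT, ASA_END_DT)
     SELECT $1, 1,
            TO_DATE($2 || $3, ''YYMMDDHH24MI''),
            TO_DATE($4 || $5, ''YYMMDDHH24MI'')
     WHERE EXISTS (SELECT 1 FROM %I.AIRSPACE asp
                   WHERE asp.ASP_AIRSPACE_NM = $1)',
    c_user, c_user)
USING airspace_name, start_date, start_time, end_date, end_time;
The %I placeholder quotes the schema name as an identifier, which also protects against SQL injection through the cur_user parameter.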

Why isn't c_user being replaced with my param (user1)?
Because object names (metadata) cannot be substituted with variables (data).
Choosing the metadata dynamically is usually not the best design, but if you need to live with it, you should do something along the lines of
EXECUTE 'INSERT INTO ' || c_user || '.AIRSPACE_AVAILABILITY ' …
instead.

Related

Oracle PL/SQL ORA-00904: invalid identifier

I'm trying to declare a string and use it in a select statement but it's throwing ORA-00904: invalid identifier.
DECLARE
var_laufi VARCHAR2(20) := 'JEIV';
BEGIN
EXECUTE IMMEDIATE q'[
WITH aux AS (
SELECT DISTINCT
zbuag_id AS ca_dat
FROM
vw_penddv2
WHERE
category = 'Issue'
), aux2 AS (
SELECT DISTINCT
vkont AS ca_lock
FROM
cdc.uap_dfkklocks#rbip
WHERE
laufi = ]'
|| var_laufi
|| q'[
AND tdate >= to_char(sysdate, 'YYYYMMDD')
)
SELECT
*
FROM
aux a
FULL OUTER JOIN aux2 b ON b.ca_lock = a.ca_dat
WHERE
( a.ca_dat IS NULL
OR b.ca_lock IS NULL )
]'
;
END;
However, if I just try to display the variable itself, it works fine.
SET SERVEROUTPUT ON;
DECLARE
var_laufi VARCHAR2(20) := 'JEIV';
BEGIN
dbms_output.put_line(var_laufi);
END;
Result:
JEIV
PL/SQL procedure successfully completed.
I'm missing something here but I can't figure out what it is.
Dynamic SQL is evil if you misuse it. Rule of thumb: never execute a statement you haven't checked first! How? Display the statement. Here's how:
DECLARE
var_laufi VARCHAR2 (20) := 'JEIV';
var_str VARCHAR2 (2000);
BEGIN
var_str := q'[
WITH aux AS (
SELECT DISTINCT
zbuag_id AS ca_dat
FROM
vw_penddv2
WHERE
category = 'Issue'
), aux2 AS (
SELECT DISTINCT
vkont AS ca_lock
FROM
cdc.uap_dfkklocks#rbip
WHERE
laufi = ]' || var_laufi || q'[
AND tdate >= to_char(sysdate, 'YYYYMMDD')
)
SELECT
*
FROM
aux a
FULL OUTER JOIN aux2 b ON b.ca_lock = a.ca_dat
WHERE
( a.ca_dat IS NULL
OR b.ca_lock IS NULL )
]';
DBMS_OUTPUT.put_line (var_str);
END;
/
The result (formatted):
WITH
aux
AS
(SELECT DISTINCT zbuag_id AS ca_dat
FROM vw_penddv2
WHERE category = 'Issue'),
aux2
AS
(SELECT DISTINCT vkont AS ca_lock
FROM cdc.uap_dfkklocks#rbip
WHERE laufi = JEIV --> here
AND tdate >= TO_CHAR (SYSDATE, 'YYYYMMDD'))
SELECT *
FROM aux a FULL OUTER JOIN aux2 b ON b.ca_lock = a.ca_dat
WHERE ( a.ca_dat IS NULL
OR b.ca_lock IS NULL)
See the comment? where laufi = JEIV is invalid; the value should have been enclosed in single quotes (or, better, passed as a bind variable).
Also, is tdate really a string? You're comparing it to sysdate represented as a string in the specified format. If so, it is a bad, BAD idea to store DATE values as strings.
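A hedged sketch of the same statement with the value passed as a bind variable instead of concatenated (assuming the same view and column names as in the question); opening a ref cursor also lets you actually fetch the rows, which EXECUTE IMMEDIATE of a bare SELECT would not:
DECLARE
  var_laufi VARCHAR2(20) := 'JEIV';
  rc        SYS_REFCURSOR;
BEGIN
  OPEN rc FOR q'[
    WITH aux AS (
      SELECT DISTINCT zbuag_id AS ca_dat
      FROM vw_penddv2
      WHERE category = 'Issue'
    ), aux2 AS (
      SELECT DISTINCT vkont AS ca_lock
      FROM cdc.uap_dfkklocks#rbip
      WHERE laufi = :laufi
        AND tdate >= to_char(sysdate, 'YYYYMMDD')
    )
    SELECT *
    FROM aux a FULL OUTER JOIN aux2 b ON b.ca_lock = a.ca_dat
    WHERE a.ca_dat IS NULL OR b.ca_lock IS NULL
  ]' USING var_laufi;
  -- fetch from rc here, or return it to the caller
END;
/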
CREATE OR REPLACE PROCEDURE SP_PARENT_PRODUCT_JOINS (V_C_ID in number, c1 out sys_refcursor)
is
V_sql varchar2(2000);
Begin
-----------(STEP_1_JOINS)---------------------
V_sql:=
'
SELECT Country.C_id,Country.C_name,product.C_id
FROM Country
INNER JOIN Product
ON country.C_id = product.C_id
where V_C_ID='||V_C_ID;
EXECUTE IMMEDIATE(V_SQL);
-------------(STEP_2_TRUNCATE DATA)----------------
EXECUTE IMMEDIATE 'TRUNCATE TABLE Product_det';
dbms_output.put_line(V_SQL);
open c1 for V_sql;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line(sqlerrm);
End;
Error:
ORA-00904: "V_C_ID": invalid identifier

Errors in PLSQL

Morning,
I'm trying to write a script that will convert Unload tables (UNLD to HDL files) creating a flat file using PLSQL. I keep getting syntax errors trying to run it and would appreciate some help from an expert out there!
Here are the errors:
Error(53,21): PLS-00330: invalid use of type name or subtype name
Error(57,32): PLS-00222: no function with name 'UNLDTABLE' exists in this scope
Our guess is that the unldTable variable is being treated as a string, rather than as a database table object (not really experienced in PL/SQL).
CREATE OR REPLACE PROCEDURE UNLD_TO_HDL (processComponent IN VARCHAR2)
IS
fHandle UTL_FILE.FILE_TYPE;
concatData VARCHAR2(240);
concatHDLMetaTags VARCHAR2(240);
outputFileName VARCHAR2(240);
TYPE rowArrayType IS TABLE OF VARCHAR2(240);
rowArray rowArrayType;
emptyArray rowArrayType;
valExtractArray rowArrayType;
hdlFileName VARCHAR2(240);
unldTable VARCHAR2(240);
countUNLDRows Number;
dataType VARCHAR2(240);
current_table VARCHAR2(30);
value_to_char VARCHAR2(240);
BEGIN
SELECT HDL_FILE_NAME
INTO hdlFileName
FROM GNC_HDL_CREATION_PARAMS
WHERE PROCESS_COMPONENT = processComponent;
SELECT UNLD_TABLE
INTO unldTable
FROM GNC_HDL_CREATION_PARAMS
WHERE PROCESS_COMPONENT = processComponent
FETCH NEXT 1 ROWS ONLY;
SELECT LISTAGG(HDL_META_TAG,'|')
WITHIN GROUP(ORDER BY HDL_META_TAG)
INTO concatHDLMetaTags
FROM GNC_MIG_CONTROL
WHERE HDL_COMP = processComponent;
SELECT DB_FIELD
BULK COLLECT INTO valExtractArray
FROM GNC_MIG_CONTROL
WHERE HDL_COMP = processComponent
ORDER BY HDL_META_TAG;
fHandle := UTL_FILE.FOPEN('./', hdlFileName, 'W');
UTL_FILE.PUTF(fHandle, concatHDLMetaTags + '\n');
SELECT num_rows INTO countUNLDRows FROM user_tables where table_name = unldTable;
FOR row in 1..countUNLDRows LOOP
rowArray := emptyArrayType;
FOR value in 1..valExtractArray.COUNT LOOP
rowArray.extend();
SELECT data_type INTO dataType FROM all_tab_columns where table_name = unldTable AND column_name = valExtractArray(value);
IF dataType = 'VARCHAR2' THEN (SELECT valExtractArray(value) INTO value_to_char FROM current_table WHERE ROWNUM = row);
ELSIF dataType = 'DATE' THEN (SELECT TO_CHAR(valExtractArray(value),'YYYY/MM/DD') INTO value_to_char FROM current_table WHERE ROWNUM = row);
ELSIF dataType = 'NUMBER' THEN (SELECT TO_CHAR(valExtractArray(value)) INTO value_to_char FROM current_table WHERE ROWNUM = row);
ENDIF;
rowArray(value) := value_to_char;
END LOOP;
concatData := NULL;
FOR item in 1..rowArray.COUNT LOOP
IF item = rowArray.COUNT
THEN concatData := (COALESCE(concatData,'') || rowArray(item));
ELSE concatData := (COALESCE(concatData,'') || rowArray(item) || '|');
END IF;
END LOOP;
UTL_FILE.PUTF(fHandle, concatData + '/n');
END LOOP;
UTL_FILE.FCLOSE(fHandle);
END;
Thanks,
Adam
I believe it is just an oversight in your code. You define unldTable as a VARCHAR2, which is used correctly until you try to access it as if it were a varray on line 51:
rowArray(value) := unldTable(row).valExtractArray(value);
Given that you have not defined it as a varray, unldTable(row) is making the interpreter believe that you are referring to a function.
EDIT
Now that you have moved on, you should resolve the problem of invoking SELECT statements on tables that are not known until runtime. To do so you need to make use of dynamic SQL; you can do it in several ways, the most direct in your case being an EXECUTE IMMEDIATE statement:
mystatement := 'SELECT ' || valExtractArray(value) || ' FROM ' || current_table || ' WHERE ROWNUM = ' || row;
execute immediate mystatement INTO value_to_char;
It looks like you need to generate a cursor as
select [list of columns from GNC_MIG_CONTROL.DB_FIELD]
from [table name from GNC_HDL_CREATION_PARAMS.UNLD_TABLE]
Assuming setup like this:
create table my_table (business_date date, id integer, dummy1 varchar2(1), dummy2 varchar2(20));
create table gnc_hdl_creation_params (unld_table varchar2(30), process_component varchar2(30));
create table gnc_mig_control (db_field varchar2(30), hdl_comp varchar2(30), hdl_meta_tag integer);
insert into my_table(business_date, id, dummy1, dummy2) values (date '2018-01-01', 123, 'X','Some more text');
insert into gnc_hdl_creation_params (unld_table, process_component) values ('MY_TABLE', 'XYZ');
insert into gnc_mig_control (db_field, hdl_comp, hdl_meta_tag) values ('BUSINESS_DATE', 'XYZ', '1');
insert into gnc_mig_control (db_field, hdl_comp, hdl_meta_tag) values ('ID', 'XYZ', '2');
insert into gnc_mig_control (db_field, hdl_comp, hdl_meta_tag) values ('DUMMY1', 'XYZ', '3');
insert into gnc_mig_control (db_field, hdl_comp, hdl_meta_tag) values ('DUMMY2', 'XYZ', '4');
You could build a query like this:
select unld_table, listagg(expr, q'[||'|'||]') within group (order by hdl_meta_tag) as expr_list
from ( select t.unld_table
, case tc.data_type
when 'DATE' then 'to_char('||c.db_field||',''YYYY-MM-DD'')'
else c.db_field
end as expr
, c.hdl_meta_tag
from gnc_hdl_creation_params t
join gnc_mig_control c
on c.hdl_comp = t.process_component
left join user_tab_columns tc
on tc.table_name = t.unld_table
and tc.column_name = c.db_field
where t.process_component = 'XYZ'
)
group by unld_table;
Output:
UNLD_TABLE EXPR_LIST
----------- --------------------------------------------------------------------------------
MY_TABLE to_char(BUSINESS_DATE,'YYYY-MM-DD')||'|'||ID||'|'||DUMMY1||'|'||DUMMY2
Now if you plug that logic into a PL/SQL procedure you could have something like this:
declare
processComponent constant gnc_hdl_creation_params.process_component%type := 'XYZ';
unloadSQL long;
unloadCur sys_refcursor;
text long;
begin
select 'select ' || listagg(expr, q'[||'|'||]') within group (order by hdl_meta_tag) || ' as text from ' || unld_table
into unloadSQL
from ( select t.unld_table
, case tc.data_type
when 'DATE' then 'to_char('||c.db_field||',''YYYY/MM/DD'')'
else c.db_field
end as expr
, c.hdl_meta_tag
from gnc_hdl_creation_params t
join gnc_mig_control c
on c.hdl_comp = t.process_component
left join user_tab_columns tc
on tc.table_name = t.unld_table
and tc.column_name = c.db_field
where t.process_component = processComponent
)
group by unld_table;
open unloadCur for unloadSQL;
loop
fetch unloadCur into text;
dbms_output.put_line(text);
exit when unloadCur%notfound;
end loop;
close unloadCur;
end;
Output:
2018/01/01|123|X|Some more text
2018/01/01|123|X|Some more text
Now you just have to make that into a procedure, change dbms_output to utl_file and add your meta tags etc and you're there.
I've assumed there is only one distinct unld_table per process component. If there are more you'll need a loop to work through each one.
For a slightly more generic approach, you could build a cursor-to-csv generator which could encapsulate the datatype handling, and then you'd only need to build the SQL as select [columns] from [table]. You might then write a generic cursor to file processor, where you pass in the filename and a cursor and it does the lot.
Edit: I've updated my cursor-to-csv generator to provide file output, so you just need to pass it a cursor and the file details.
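As a minimal sketch of that last step (DATA_DIR is an assumed Oracle directory object; in the real procedure the unload SQL would be built from the dictionary as shown above rather than hard-coded):
declare
  -- hard-coded here only for illustration; build it as in the block above
  unloadSQL varchar2(4000) :=
    q'[select to_char(business_date,'YYYY/MM/DD')||'|'||id||'|'||dummy1||'|'||dummy2 from my_table]';
  unloadCur sys_refcursor;
  text      varchar2(4000);
  fHandle   utl_file.file_type;
begin
  fHandle := utl_file.fopen('DATA_DIR', 'my_table.hdl', 'W');  -- assumed directory object and file name
  open unloadCur for unloadSQL;
  loop
    fetch unloadCur into text;
    exit when unloadCur%notfound;
    utl_file.put_line(fHandle, text);
  end loop;
  close unloadCur;
  utl_file.fclose(fHandle);
end;
/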

Count the number of null values in an Oracle table?

I need to count the number of null values of all the columns in a table in Oracle.
For instance, I execute the following statements to create a table TEST and insert data.
CREATE TABLE TEST
( A VARCHAR2(20 BYTE),
B VARCHAR2(20 BYTE),
C VARCHAR2(20 BYTE)
);
Insert into TEST (A) values ('a');
Insert into TEST (B) values ('b');
Insert into TEST (C) values ('c');
Now, I write the following code to compute the number of null values in the table TEST:
declare
cnt number :=0;
temp number :=0;
begin
for r in ( select column_name, data_type
from user_tab_columns
where table_name = upper('test')
order by column_id )
loop
if r.data_type <> 'NOT NULL' then
select count(*) into temp FROM TEST where r.column_name IS NULL;
cnt := cnt + temp;
END IF;
end loop;
dbms_output.put_line('Total: '||cnt);
end;
/
It returns 0, when the expected value is 6.
Where is the error?
Thanks in advance.
Counting NULLs for each column
In order to count NULL values for all columns of a table T you could run
SELECT COUNT(*) - COUNT(col1) col1_nulls
, COUNT(*) - COUNT(col2) col2_nulls
,..
, COUNT(*) - COUNT(colN) colN_nulls
, COUNT(*) total_rows
FROM T
/
where col1, col2, .., colN should be replaced with the actual column names of table T.
Aggregate functions like COUNT() ignore NULL values, so COUNT(*) - COUNT(col) gives you the number of NULLs in each column.
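Applied to the TEST table from the question, for example, this becomes:
SELECT COUNT(*) - COUNT(A) AS a_nulls
     , COUNT(*) - COUNT(B) AS b_nulls
     , COUNT(*) - COUNT(C) AS c_nulls
     , COUNT(*)            AS total_rows
FROM TEST;
-- with the three inserts above: A_NULLS = 2, B_NULLS = 2, C_NULLS = 2, TOTAL_ROWS = 3 (6 NULLs in total)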
Summarize all NULLs of a table
If you want to know how many fields are NULL, I mean every NULL of every record you can
WITH d as (
SELECT COUNT(*) - COUNT(col1) col1_nulls
, COUNT(*) - COUNT(col2) col2_nulls
,..
, COUNT(*) - COUNT(colN) colN_nulls
, COUNT(*) total_rows
FROM T
) SELECT col1_nulls + col2_nulls +..+ colN_nulls
FROM d
/
Summarize all NULLs of a table (using Oracle dictionary tables)
The following is an improvement in which you need to know nothing but the table name, and it is very easy to code a function based on it:
DECLARE
T VARCHAR2(64) := '<YOUR TABLE NAME>';
expr VARCHAR2(32767);
q INTEGER;
BEGIN
SELECT 'SELECT /*+FULL(T) PARALLEL(T)*/ ' || COUNT(*) || ' * COUNT(*) - (' || LISTAGG('COUNT(' || COLUMN_NAME || ')', ' + ') WITHIN GROUP (ORDER BY COLUMN_ID) || ') FROM ' || T
INTO expr
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = T;
-- This line is for debugging purposes only
DBMS_OUTPUT.PUT_LINE(expr);
EXECUTE IMMEDIATE expr INTO q;
DBMS_OUTPUT.PUT_LINE(q);
END;
/
Because the calculation implies a full table scan, the statement produced in the expr variable is hinted to run in parallel.
User defined function null_fields
A function version, which also includes an optional parameter so it can be run against other schemas.
CREATE OR REPLACE FUNCTION null_fields(table_name IN VARCHAR2, owner IN VARCHAR2 DEFAULT USER)
RETURN INTEGER IS
T VARCHAR2(64) := UPPER(table_name);
o VARCHAR2(64) := UPPER(owner);
expr VARCHAR2(32767);
q INTEGER;
BEGIN
SELECT 'SELECT /*+FULL(T) PARALLEL(T)*/ ' || COUNT(*) || ' * COUNT(*) - (' || listagg('COUNT(' || column_name || ')', ' + ') WITHIN GROUP (ORDER BY column_id) || ') FROM ' || o || '.' || T || ' t'
INTO expr
FROM all_tab_columns
WHERE table_name = T
AND owner = o;
EXECUTE IMMEDIATE expr INTO q;
RETURN q;
END;
/
-- Usage 1
SELECT null_fields('<your table name>') FROM dual
/
-- Usage 2
SELECT null_fields('<your table name>', '<table owner>') FROM dual
/
Thank you @Lord Peter:
The PL/SQL script below works.
declare
cnt number :=0;
temp number :=0;
begin
for r in ( select column_name, nullable
from user_tab_columns
where table_name = upper('test')
order by column_id )
loop
if r.nullable = 'Y' then
EXECUTE IMMEDIATE 'SELECT count(*) FROM test where '|| r.column_name ||' IS NULL' into temp ;
cnt := cnt + temp;
END IF;
end loop;
dbms_output.put_line('Total: '||cnt);
end;
/
The table name test may be replaced with the name of the table you are interested in.
I hope this solution is useful!
The dynamic SQL you execute (this is the string used in EXECUTE IMMEDIATE) should be
select sum(
decode(a,null,1,0)
+decode(b,null,1,0)
+decode(c,null,1,0)
) nullcols
from test;
where each summand corresponds to a nullable column (NOT NULL columns can be skipped, since they contribute nothing).
Here only one table scan is necessary to get the result.
Use the data dictionary to find the number of NULL values almost instantly:
select sum(num_nulls) sum_num_nulls
from all_tab_columns
where owner = user
and table_name = 'TEST';
SUM_NUM_NULLS
-------------
6
The values will only be correct if optimizer statistics were gathered recently and if they were gathered with the default value for the sample size.
Those may seem like large caveats but it's worth becoming familiar with your database's statistics gathering process anyway. If your database is not automatically gathering statistics or if your database is not using the default sample size those are likely huge problems you need to be aware of.
To manually gather stats for a specific table a statement like this will work:
begin
dbms_stats.gather_table_stats(user, 'TEST');
end;
/
select COUNT(1) TOTAL from table where COLUMN is NULL;

PostgreSQL: Function with multiple date parameter

I'm trying to create a function with multiple parameter as below:
CREATE OR REPLACE FUNCTION select_name_and_date (
IN f_name character,
IN m_name character,
IN l_name character,
IN start_date date,
IN end_date date )
RETURNS TABLE (
start_date date ,first_name character, middle_name character,last_name character ) AS $BODY$
BEGIN RETURN QUERY
select a.start_date, a.first_name, a.middle_name, a.last_name
FROM table1 a
where code in ('NEW', 'OLD')
and ( (a.first_name like '%' || f_name || '%' and a.middle_name like '%' || m_name || '%' and a.last_name like '%' || l_name || '%'))
or ((a.date_applied) between start_date and end_date );
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
When I tried to execute with date, it shows correct result.
select * from select_name_and_date ('Firstname','','','2016-06-27','2016-06-28');
When i tried to remove the value of date, it shows:
ERROR: invalid input syntax for type date: ""
select * from select_name_and_date ('Firstname','','','','');
When I tried to replace with NULL value of the date, it shows: 0 rows retrieved. (when it should have)
select * from select_name_and_date ('Firstname','','',NULL,NULL);
I want to have parameter that not depending on each parameter.
The BETWEEN operator does not handle NULLs. If you want to allow them, you'll have to treat them explicitly. E.g., you could rewrite the part of the condition that applies to a.date_applied as follows:
((a.date_applied BETWEEN start_date AND end_date) OR
(start_date IS NULL AND a.date_applied <= end_date) OR
(end_date IS NULL AND a.date_applied >= start_date) OR
(start_date IS NULL AND end_date IS NULL))
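An equivalent, more compact way to express the same thing (a sketch, assuming date_applied, start_date and end_date are all of type date) is to substitute open-ended bounds for the NULLs:
(a.date_applied >= COALESCE(start_date, '-infinity'::date)
 AND a.date_applied <= COALESCE(end_date, 'infinity'::date))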

Find all tables updated on a specific date

I'm using an Oracle DB, and I'm trying to find all tables that were updated on a certain date. All of the tables that track updates have a column called DT_UPDATE. I've been trying this:
SELECT * FROM
(SELECT TABLE_NAME FROM ALL_TAB_COLUMNS WHERE COLUMN_NAME = 'DT_UPDATE')
WHERE DT_UPDATE = <date>
But get this error:
ORA-00904: "DT_UPDATE": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 3 Column: 7
I've also tried aliasing the nested Select clause.
As @zaratustra said, you have to use dynamic SQL. You can do something like this:
set serveroutput on
declare
counter number;
begin
for r in (
select owner, table_name
from all_tab_columns
where column_name = 'DT_UPDATE'
) loop
execute immediate 'select count(*) from "'
|| r.owner || '"."' || r.table_name
|| '" where dt_update = :dt and rownum = 1'
into counter
using date '2014-07-07';
if counter = 1 then
dbms_output.put_line(r.table_name);
end if;
end loop;
end;
/
For each table_name (and owner, for completeness) identified in all_tab_columns as having a column called dt_update, a new dynamic select is generated, in the form:
select count(*) from "<owner>"."<table_name>"
where dt_update = date '2014-07-07'
and rownum = 1;
The rownum = 1 filter lets the query execution stop as soon as a matching row is found; since you said you want to know which tables were updated, not how many rows or exactly which rows, if one row matches then that is all you really need to know. So for every table the dynamic query gets either 0 or 1.
For any tables that have at least one row matching the date, this prints the table name using dbms_output, so you have to have that enabled - with set serveroutput on, or with the DBMS_OUTPUT panel in SQL Developer, or your favourite client's equivalent.
If I create some tables with that column, but only populate one with the date I'm looking for:
create table tab1 (dt_update date);
create table tab2 (dt_update date);
create table tab3 (dt_update date);
insert into tab1 values (trunc(sysdate) - 1);
insert into tab2 values (trunc(sysdate));
... then running my anonymous block produces:
anonymous block completed
TAB1
Use your own target date, obviously. This assumes your date field doesn't contain a time component. If it does then you'd need to turn that into a range to cover the whole day.
You could also turn this into a pipelined function that takes a date as an argument; this also handles date fields with time elements:
create or replace function get_updated_tables(p_date date)
return sys.odcivarchar2list pipelined as
counter number;
begin
for r in (
select owner, table_name
from all_tab_columns
where column_name = 'DT_UPDATE'
) loop
execute immediate 'select count(*) from "'
|| r.owner || '"."' || r.table_name
|| '" where dt_update >= :dt1 and dt_update < :dt2'
|| ' and rownum = 1'
into counter
using p_date, p_date + interval '1' day;
if counter = 1 then
pipe row (r.table_name);
end if;
end loop;
end;
/
Then you can query it with:
select column_value from table(get_updated_tables(date '2014-07-07'));
COLUMN_VALUE
------------------------------
TAB1
Dynamic SQL is interesting, as you said in a comment, but should only be used when necessary. The generated statement can't be parsed until it's executed, so you might not spot syntax or other errors until run-time. Also make sure you use bind variables for values (but not object names) to avoid SQL injection.
Let's assume we have three tables with the field dt_update, and each of them has one record (doesn't matter if more):
create table tt1 (
dt_update date
);
insert into tt1 values (sysdate);
create table tt2 (
dt_update date
);
insert into tt2 values (sysdate - 1);
create table tt3 (
dt_update date
);
insert into tt3 values (sysdate - 2);
This PL/SQL anonymous block prints only the names of tables that have a record whose dt_update value is greater than or equal to the deadline date:
declare
type table_names_tp is table of user_tables.table_name%type index by binary_integer;
table_names table_names_tp;
l_res number(1);
l_deadline date := to_date('2014-07-08', 'YYYY-MM-DD');
begin
select table_name
BULK COLLECT INTO table_names
from user_tab_columns
where lower(column_name) = 'dt_update'
;
for i in table_names.first..table_names.last
loop
execute immediate 'select count(*) from dual where exists (select null from ' || table_names(i) || ' where dt_update >= :dead_line)'
into l_res
using l_deadline;
if l_res = 1
then
DBMS_OUTPUT.put_line('Table ' || table_names(i) || ' was updated after ' || l_deadline);
end if;
end loop;
end;
You can use this code as an example to start writing your own. Pay careful attention to protecting yourself from SQL injection: DO NOT(!) concatenate your values into the statement; always use bind variables instead. Bind variables also allow the query plan to be cached in the SGA, so subsequent executions are soft-parsed rather than hard-parsed.
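For instance, reusing tt1, l_res and l_deadline from the block above, the difference looks like this (a sketch):
-- avoid: the value becomes part of the SQL text, so every distinct value
-- produces a new cursor, and string values could inject SQL
execute immediate
  'select count(*) from tt1 where dt_update >= date ''' ||
  to_char(l_deadline, 'YYYY-MM-DD') || '''' into l_res;
-- prefer: one shared cursor, no injection risk
execute immediate
  'select count(*) from tt1 where dt_update >= :dead_line'
  into l_res using l_deadline;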