Create multiple tables with array variable - sql

Here is the problem: I read year numbers from a table and insert them into another one (static for now, but eventually a temp table).
Then I loop over those numbers and create tables named after them.
BEGIN
  INSERT INTO temp_year
    ( year_column )
    (
      select extract(year from datum) from datetest
    );
  FOR counter_id IN ( SELECT * FROM temp_year )
  LOOP
    EXECUTE IMMEDIATE 'Create table YEAR_' || counter_id || ' (year int, name char(50))'
  END LOOP;
END;
/
temp_year has a column year_column (int), filled with values like 2012.
datetest has a date column with values like 10.02.2012.
The result should be a table named YEAR_2012 with the columns year int and name char(50).
However, this is not working: it quits at the EXECUTE IMMEDIATE part, even when there are years in the temp_year table.
Any ideas?
Thanks in advance.
THeVagabond

Try this one:
EXECUTE IMMEDIATE 'Create table YEAR_' || counter_id.year_column || ' (year integer, name varchar2(50))';
(Do not miss the semicolon at the end)
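For completeness, a minimal sketch of the whole corrected block (same table names as above, untested):
BEGIN
  INSERT INTO temp_year (year_column)
  SELECT EXTRACT(YEAR FROM datum) FROM datetest;

  FOR counter_id IN (SELECT * FROM temp_year)
  LOOP
    -- counter_id is a record, so its column must be referenced explicitly,
    -- and the EXECUTE IMMEDIATE statement needs a terminating semicolon
    EXECUTE IMMEDIATE 'create table YEAR_' || counter_id.year_column
                      || ' (year integer, name varchar2(50))';
  END LOOP;
END;
/
Note that if the same year appears more than once in datetest, the second CREATE TABLE for that year fails with ORA-00955, so a SELECT DISTINCT may be needed.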

Related

How to get value printed on Postgres

I have a requirement to translate this into an SQL script.
I am using the information schema to get all the columns of a table and print each column's distinct count.
I was able to get the count, but not able to print the column name properly.
The code is below.
I have to pass the value of colum_lbl into my select clause; if I do that directly, it gives me a GROUP BY error.
So I put colum_lbl inside quotes, but now every row of the result has the hardcoded string 'colum_lbl' as its value, and I need it replaced with the actual value read in the FOR loop.
Any other, more efficient method for this requirement will be very much appreciated. Thanks in advance.
do $$
DECLARE
    colum_lbl text;
BEGIN
    DROP TABLE IF EXISTS tmp_table;
    CREATE TABLE tmp_table
    (
        colnm varchar(50),
        cnt   integer
    );
    FOR colum_lbl IN
        SELECT distinct column_name
        FROM information_schema.columns
        WHERE table_schema = 'cva_aggr'
          AND table_name = 'employee' AND column_name in ('empid','empnm')
    LOOP
        EXECUTE
            'Insert into tmp_table
             SELECT '' || colum_lbl || '',count(distinct ' || colum_lbl || ')
             FROM employee ';
    END LOOP;
END; $$
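One way to get the real column name into the first position of the select list (a sketch, assuming the same employee table and tmp_table as above) is to build the statement with format(), using %L for the label as a literal and %I for the column as an identifier:
do $$
DECLARE
    colum_lbl text;
BEGIN
    DROP TABLE IF EXISTS tmp_table;
    CREATE TABLE tmp_table (colnm varchar(50), cnt integer);
    FOR colum_lbl IN
        SELECT column_name
        FROM information_schema.columns
        WHERE table_schema = 'cva_aggr'
          AND table_name = 'employee'
          AND column_name in ('empid','empnm')
    LOOP
        -- %L injects the label as a quoted literal, %I as a quoted identifier,
        -- which also protects against SQL injection
        EXECUTE format(
            'INSERT INTO tmp_table
             SELECT %L, count(distinct %I) FROM employee',
            colum_lbl, colum_lbl);
    END LOOP;
END; $$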

How to return result of many select statements as one custom table

I have a table (let's name it source_tab) where I store a list of all database tables that meet some criteria:
tab_name   description
table1     some_desc1
table2     some_desc2
Now I need to execute a select statement on each of these tables and return the result as one table (I created a custom TYPE). However, I have a problem: when using BULK COLLECT, only the result of the last select statement is returned. The same happened with an open cursor. Is there any way to achieve this, other than concatenating all the select statements with UNION ALL and executing them as one statement? And because I'm a beginner in SQL, my second question is: is it OK to use this dynamic SQL in terms of SQL injection? Below is a simplified version of my code:
CREATE OR REPLACE FUNCTION my_function RETURN newly_created_table_type IS
  ret_tab_type newly_created_table_type;
BEGIN
  for r in (select * from source_tab)
  loop
    execute immediate 'select value1, value2,''' || r.tab_name || ''' from ' || r.tab_name bulk collect into ret_tab_type;
  end loop;
  return ret_tab_type;
END;
I'm using Oracle 11.
You are trying to populate a collection dynamically and want the whole result in a single collection; that is not possible this way, because each iteration's BULK COLLECT overwrites the collection. Also, as mentioned by @OldProgrammer, a pipelined function would be the better solution from a performance point of view. See the demo below:
--Tables and sample data:
CREATE TABLE SOURCE_TAB(TAB_NAME VARCHAR2(100), DESCRIPTION VARCHAR2(100));
INSERT INTO SOURCE_TAB VALUES('table1','some_desc1');
INSERT INTO SOURCE_TAB VALUES('table2','some_desc2');
SELECT * FROM SOURCE_TAB;

CREATE TABLE TABLE1(COL1 NUMBER, COL2 NUMBER);
INSERT INTO TABLE1 VALUES(1,2);
INSERT INTO TABLE1 VALUES(3,4);
INSERT INTO TABLE1 VALUES(5,6);
SELECT * FROM TABLE1;

CREATE TABLE TABLE2(COL1 NUMBER, COL2 NUMBER);
INSERT INTO TABLE2 VALUES(7,8);
INSERT INTO TABLE2 VALUES(9,10);
INSERT INTO TABLE2 VALUES(11,12);
SELECT * FROM TABLE2;
--Object type (UDT)
CREATE OR REPLACE TYPE NEWLY_CREATED_TABLE_TYPE IS OBJECT (
  VALUE1 NUMBER,
  VALUE2 NUMBER
);
/
--Collection type based on the UDT
CREATE OR REPLACE TYPE NEWLY_CRTD_TYP AS TABLE OF NEWLY_CREATED_TABLE_TYPE;
/
--Pipelined function
CREATE OR REPLACE FUNCTION MY_FUNCTION
  RETURN NEWLY_CRTD_TYP PIPELINED
AS
  CURSOR CUR_TAB IS
    SELECT *
      FROM SOURCE_TAB;
  RET_TAB_TYPE NEWLY_CRTD_TYP;
BEGIN
  FOR I IN CUR_TAB
  LOOP
    -- This assumes every table listed in SOURCE_TAB has COL1 and COL2 columns,
    -- since the statement is built dynamically.
    EXECUTE IMMEDIATE 'select NEWLY_CREATED_TABLE_TYPE(COL1, COL2) from ' || I.TAB_NAME
      BULK COLLECT INTO RET_TAB_TYPE;
    FOR REC IN 1 .. RET_TAB_TYPE.COUNT
    LOOP
      PIPE ROW (RET_TAB_TYPE(REC));
    END LOOP;
  END LOOP;
  RETURN;
END;
/
Output:
SQL> Select * from table(MY_FUNCTION);

    VALUE1     VALUE2
---------- ----------
         1          2
         3          4
         5          6
         7          8
         9         10
        11         12

6 rows selected.
Maybe you can combine all the queries into one using UNION ALL before execution, if the number and types of the columns retrieved from all the tables are identical.
CREATE OR REPLACE FUNCTION my_function
RETURN newly_created_table_type
IS
ret_tab_type newly_created_table_type;
v_query VARCHAR2 (4000);
BEGIN
SELECT LISTAGG (' select VALUE1,VALUE2 FROM ' || tab_name, ' UNION ALL ')
WITHIN GROUP (ORDER BY tab_name)
INTO v_query
FROM source_tab;
EXECUTE IMMEDIATE v_query BULK COLLECT INTO ret_tab_type;
RETURN ret_tab_type;
END;
You could then use a single select statement to get all the values.
select * FROM TABLE ( my_function );

Count the number of null values in an Oracle table?

I need to count the number of null values of all the columns in a table in Oracle.
For instance, I execute the following statements to create a table TEST and insert data.
CREATE TABLE TEST
( A VARCHAR2(20 BYTE),
B VARCHAR2(20 BYTE),
C VARCHAR2(20 BYTE)
);
Insert into TEST (A) values ('a');
Insert into TEST (B) values ('b');
Insert into TEST (C) values ('c');
Now, I write the following code to compute the number of null values in the table TEST:
declare
cnt number :=0;
temp number :=0;
begin
for r in ( select column_name, data_type
from user_tab_columns
where table_name = upper('test')
order by column_id )
loop
if r.data_type <> 'NOT NULL' then
select count(*) into temp FROM TEST where r.column_name IS NULL;
cnt := cnt + temp;
END IF;
end loop;
dbms_output.put_line('Total: '||cnt);
end;
/
It returns 0, when the expected value is 6.
Where is the error?
Thanks in advance.
Counting NULLs for each column
In order to count NULL values for all columns of a table T you could run
SELECT COUNT(*) - COUNT(col1) col1_nulls
, COUNT(*) - COUNT(col2) col2_nulls
,..
, COUNT(*) - COUNT(colN) colN_nulls
, COUNT(*) total_rows
FROM T
/
Here col1, col2, .., colN should be replaced with the actual column names of table T.
Aggregate functions such as COUNT() ignore NULL values, so COUNT(*) - COUNT(col) gives you the number of NULLs in each column.
Summarize all NULLs of a table
If you want to know how many fields are NULL in total, meaning every NULL of every record, you can run
WITH d as (
SELECT COUNT(*) - COUNT(col1) col1_nulls
, COUNT(*) - COUNT(col2) col2_nulls
,..
, COUNT(*) - COUNT(colN) colN_nulls
, COUNT(*) total_rows
FROM T
) SELECT col1_nulls + col2_nulls +..+ colN_nulls
FROM d
/
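For the TEST table from the question (columns A, B, C), a concrete instance of the above, which returns 6 for the sample data, would be:
WITH d AS (
  SELECT COUNT(*) - COUNT(a) a_nulls
       , COUNT(*) - COUNT(b) b_nulls
       , COUNT(*) - COUNT(c) c_nulls
       , COUNT(*)            total_rows
    FROM test
)
SELECT a_nulls + b_nulls + c_nulls AS total_nulls
  FROM d;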
Summarize all NULLs of a table (using Oracle dictionary tables)
The following is an improvement in which you need to know nothing but the table name, and it is easy to build a function on top of it.
DECLARE
T VARCHAR2(64) := '<YOUR TABLE NAME>';
expr VARCHAR2(32767);
q INTEGER;
BEGIN
SELECT 'SELECT /*+FULL(T) PARALLEL(T)*/ ' || COUNT(*) || ' * COUNT(*) - (' || LISTAGG('COUNT(' || COLUMN_NAME || ')', ' + ') WITHIN GROUP (ORDER BY COLUMN_ID) || ') FROM ' || T || ' T'
INTO expr
FROM USER_TAB_COLUMNS
WHERE TABLE_NAME = T;
-- This line is for debugging purposes only
DBMS_OUTPUT.PUT_LINE(expr);
EXECUTE IMMEDIATE expr INTO q;
DBMS_OUTPUT.PUT_LINE(q);
END;
/
Because the calculation implies a full table scan, the statement produced in the expr variable is hinted for parallel execution.
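As an illustration, for the TEST table from the question the debugging line would print expr roughly as follows (a sketch: three columns, parenthesised sum of the per-column counts):
SELECT /*+FULL(T) PARALLEL(T)*/ 3 * COUNT(*) - (COUNT(A) + COUNT(B) + COUNT(C)) FROM TEST T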
User defined function null_fields
The function version also includes an optional parameter so it can be run against other schemas.
CREATE OR REPLACE FUNCTION null_fields(table_name IN VARCHAR2, owner IN VARCHAR2 DEFAULT USER)
RETURN INTEGER IS
T VARCHAR2(64) := UPPER(table_name);
o VARCHAR2(64) := UPPER(owner);
expr VARCHAR2(32767);
q INTEGER;
BEGIN
SELECT 'SELECT /*+FULL(t) PARALLEL(t)*/ ' || COUNT(*) || ' * COUNT(*) - (' || listagg('COUNT(' || column_name || ')', ' + ') WITHIN GROUP (ORDER BY column_id) || ') FROM ' || o || '.' || T || ' t'
INTO expr
FROM all_tab_columns
WHERE table_name = T
AND owner = o;
EXECUTE IMMEDIATE expr INTO q;
RETURN q;
END;
/
-- Usage 1
SELECT null_fields('<your table name>') FROM dual
/
-- Usage 2
SELECT null_fields('<your table name>', '<table owner>') FROM dual
/
Thank you @Lord Peter:
The PL/SQL script below works.
declare
cnt number :=0;
temp number :=0;
begin
for r in ( select column_name, nullable
from user_tab_columns
where table_name = upper('test')
order by column_id )
loop
if r.nullable = 'Y' then
EXECUTE IMMEDIATE 'SELECT count(*) FROM test where '|| r.column_name ||' IS NULL' into temp ;
cnt := cnt + temp;
END IF;
end loop;
dbms_output.put_line('Total: '||cnt);
end;
/
The table name test may be replaced with the name of the table you are interested in.
I hope this solution is useful!
The dynamic SQL you execute (this is the string used in EXECUTE IMMEDIATE) should be
select sum(
decode(a,null,1,0)
+decode(b,null,1,0)
+decode(c,null,1,0)
) nullcols
from test;
Here each summand corresponds to a nullable column; columns declared NOT NULL can be skipped, since they cannot contain NULLs.
Here only one table scan is necessary to get the result.
Use the data dictionary to find the number of NULL values almost instantly:
select sum(num_nulls) sum_num_nulls
from all_tab_columns
where owner = user
and table_name = 'TEST';
SUM_NUM_NULLS
-------------
6
The values will only be correct if optimizer statistics were gathered recently and if they were gathered with the default value for the sample size.
Those may seem like large caveats but it's worth becoming familiar with your database's statistics gathering process anyway. If your database is not automatically gathering statistics or if your database is not using the default sample size those are likely huge problems you need to be aware of.
To manually gather stats for a specific table a statement like this will work:
begin
dbms_stats.gather_table_stats(user, 'TEST');
end;
/
To count the NULLs in a single column: select COUNT(1) TOTAL from your_table where your_column is NULL;

How to get data from a Oracle database table except specific columns? [duplicate]

This question already has answers here:
Can you SELECT everything, but 1 or 2 fields, without writer's cramp?
(12 answers)
Closed 6 years ago.
I want to write a query to fetch data from a table except some columns whose names start like the wildcard criteria in the example pseudo-code below. Is this possible in Oracle?
(This could be done by listing the column names in the select clause, but assuming new columns will be added in the future, I want to write more generic code.)
example
Employee(id , name , age, gender)
select *
from table_name
where column_name not like a%
After the query it should display a table with
Employee(id, name, gender)
The age column is not there because we did not include it in the result.
You can try with some dynamic SQL:
declare
vSQL varchar2(32767);
vClob clob;
begin
/* build the query */
select distinct 'select ' || listagg(column_name, ',') within group (order by column_name) over (partition by table_name)|| ' from ' || table_name
into vSQL
from user_tab_columns
where table_name = 'EMPLOYEE'
and column_name not like 'A%';
/* print the query */
dbms_output.put_line(vSQL);
/* build an XML */
select DBMS_XMLGEN.getXML(vSQL)
into vClob
from dual;
dbms_output.put_line(vClob);
/* build a CLOB with all the columns */
vSQL := replace (vSQL, ',', ' || '' | '' || ' );
execute immediate vSQL into vClob;
dbms_output.put_line(vClob);
end;
In this way you can dynamically build a query that extracts all the columns except those matching a pattern.
After building the query, the question is how to fetch it, given that you don't know in advance which columns you are fetching.
In the example I build an XML document and a single concatenated row; you can use the query in different ways, depending on your needs.
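Another option (a sketch, not part of the original answer) is to open the generated statement as a ref cursor, so the caller or client tool fetches a result set whose column list is only known at run time:
declare
  vSQL varchar2(32767);
  vCur sys_refcursor;
begin
  /* build the column list, excluding columns starting with A */
  select 'select ' || listagg(column_name, ',') within group (order by column_id)
         || ' from EMPLOYEE'
    into vSQL
    from user_tab_columns
   where table_name = 'EMPLOYEE'
     and column_name not like 'A%';
  /* open the dynamic query as a cursor */
  open vCur for vSQL;
  -- 12c and later: hand the cursor back to the client;
  -- in 11g, return vCur from a function or procedure instead
  dbms_sql.return_result(vCur);
end;
/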
This is a duplicate, but I like writing PL/SQL, so here is how you could do it by creating a temp table and then selecting * from it:
declare
your_table varchar2(40) := 'CHEMIN';
select_to_tmp varchar2(4000) := 'create table ttmp as select ';
begin
-- drop temporary table if exists
begin
execute immediate 'drop table ttmp';
Exception
When others Then
dbms_output.put_line(SQLERRM);
end;
for x in (
select column_name from all_tab_columns
where table_name=your_table
and column_name not in (
-- list columns you want to exclude
'COL_A'
, 'COL_B'
)
)
loop
select_to_tmp := select_to_tmp|| x.column_name ||',';
dbms_output.put_line(x.column_name);
end loop;
-- remove last ','
select_to_tmp := substr(select_to_tmp, 1, length(select_to_tmp) -1);
-- from your table
select_to_tmp := select_to_tmp||' from '||your_table;
-- add conditions if necessary
-- select_to_tmp := select_to_tmp|| ' where rownum < 1 '
dbms_output.put_line(select_to_tmp);
-- then create the temporary table using the query you generated:
execute immediate select_to_tmp;
end;
/
SELECT * FROM ttmp;

Find all tables updated on a specific date

I'm using an Oracle DB, and I'm trying to find all tables that were updated on a certain date. All of the tables that track updates have a column called DT_UPDATE. I've been trying this:
SELECT * FROM
(SELECT TABLE_NAME FROM ALL_TAB_COLUMNS WHERE COLUMN_NAME = 'DT_UPDATE')
WHERE DT_UPDATE = <date>
But get this error:
ORA-00904: "DT_UPDATE": invalid identifier
00904. 00000 - "%s: invalid identifier"
*Cause:
*Action:
Error at Line: 3 Column: 7
I've also tried aliasing the nested Select clause.
As @zaratustra said, you have to use dynamic SQL. You can do something like this:
set serveroutput on
declare
counter number;
begin
for r in (
select owner, table_name
from all_tab_columns
where column_name = 'DT_UPDATE'
) loop
execute immediate 'select count(*) from "'
|| r.owner || '"."' || r.table_name
|| '" where dt_update = :dt and rownum = 1'
into counter
using date '2014-07-07';
if counter = 1 then
dbms_output.put_line(r.table_name);
end if;
end loop;
end;
/
For each table_name (and owner, for completeness) identified in all_tab_columns as having a column called dt_update, a new dynamic select is generated, in the form:
select count(*) from "<owner>"."<table_name>"
where dt_update = date '2014-07-07'
and rownum = 1;
The rownum = 1 filter lets the query execution stop as soon as a matching row is found; since you said you want to know which tables were updated, not how many rows or exactly which rows, if one row matches then that is all you really need to know. So for every table the dynamic query gets either 0 or 1.
For any tables that have at least one row matching the date, this prints the table name using dbms_output, so you have to have that enabled: with set serveroutput on, with the DBMS_OUTPUT panel in SQL Developer, or with your favourite client's equivalent.
If I create some tables with that column, but only populate one with the date I'm looking for:
create table tab1 (dt_update date);
create table tab2 (dt_update date);
create table tab3 (dt_update date);
insert into tab1 values (trunc(sysdate) - 1);
insert into tab2 values (trunc(sysdate));
... then running my anonymous block produces:
anonymous block completed
TAB1
Use your own target date, obviously. This assumes your date field doesn't contain a time component. If it does then you'd need to turn that into a range to cover the whole day.
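For example, against the tab1 table created above, a sketch of the whole-day range form of the check would be:
select count(*)
  from tab1
 where dt_update >= date '2014-07-07'
   and dt_update <  date '2014-07-07' + interval '1' day
   and rownum = 1;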
You could also turn this into a pipelined function that takes a date as an argument; this also handles date fields with time elements:
create or replace function get_updated_tables(p_date date)
return sys.odcivarchar2list pipelined as
counter number;
begin
for r in (
select owner, table_name
from all_tab_columns
where column_name = 'DT_UPDATE'
) loop
execute immediate 'select count(*) from "'
|| r.owner || '"."' || r.table_name
|| '" where dt_update >= :dt1 and dt_update < :dt2'
|| ' and rownum = 1'
into counter
using p_date, p_date + interval '1' day;
if counter = 1 then
pipe row (r.table_name);
end if;
end loop;
end;
/
Then you can query it with:
select column_value from table(get_updated_tables(date '2014-07-07'));
COLUMN_VALUE
------------------------------
TAB1
Dynamic SQL is interesting, as you said in a comment, but should only be used when necessary. The generated statement can't be parsed until it's executed, so you might not spot syntax or other errors until run-time. Also make sure you use bind variables for values (but not object names) to avoid SQL injection.
Let's assume we have three tables with the field dt_update, and each of them has one record (doesn't matter if more):
create table tt1 (
dt_update date
);
insert into tt1 values (sysdate);
create table tt2 (
dt_update date
);
insert into tt2 values (sysdate - 1);
create table tt3 (
dt_update date
);
insert into tt3 values (sysdate - 2);
This anonymous PL/SQL block prints only the names of the tables that have at least one record with a dt_update value greater than or equal to the deadline date:
declare
type table_names_tp is table of user_tables.table_name%type index by binary_integer;
table_names table_names_tp;
l_res number(1);
l_deadline date := to_date('2014-07-08', 'YYYY-MM-DD');
begin
select table_name
BULK COLLECT INTO table_names
from user_tab_columns
where lower(column_name) = 'dt_update'
;
for i in table_names.first..table_names.last
loop
execute immediate 'select count(*) from dual where exists (select null from ' || table_names(i) || ' where dt_update >= :dead_line)'
into l_res
using l_deadline;
if l_res = 1
then
DBMS_OUTPUT.put_line('Table ' || table_names(i) || ' was updated after ' || l_deadline);
end if;
end loop;
end;
You can use this code as an example to start writing your own. Pay careful attention to protecting yourself from SQL injection: DO NOT(!) concatenate your values into the statement, always use bind variables instead. Binds also let Oracle cache the statement and its plan in the SGA, so subsequent executions only need a soft parse.
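Bind variables cannot be used for object names, though. For the table name itself, a common safeguard (a sketch, not part of the original answer) is to validate it with DBMS_ASSERT before concatenating:
declare
  l_table    varchar2(128) := 'TT1';   -- value that might come from outside
  l_res      number;
  l_deadline date := to_date('2014-07-08', 'YYYY-MM-DD');
begin
  execute immediate
    'select count(*) from dual where exists (select null from '
    -- simple_sql_name raises ORA-44003 if the value is not a plain SQL name
    || dbms_assert.simple_sql_name(l_table)
    || ' where dt_update >= :dead_line)'
    into l_res
    using l_deadline;
  dbms_output.put_line(l_res);
end;
/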