How to change NULL constraint for all columns? - sql

I have some big tables (30+ columns) with NOT NULL constraints. I would like to change all those constraints to NULL. To do it for a single column I can use
ALTER TABLE <your table> MODIFY <column name> NULL;
Is there a way to do it for all columns in one request, or should I copy/paste this line for every column?

Is there a way to do it for all columns in one request?
Yes, by (ab)using EXECUTE IMMEDIATE in PL/SQL: loop through all the columns by querying the USER_TAB_COLUMNS view.
For example,
BEGIN
  FOR i IN
  ( SELECT column_name FROM user_tab_columns WHERE table_name = '<TABLE_NAME>' AND nullable = 'N'
  )
  LOOP
    EXECUTE IMMEDIATE 'ALTER TABLE <TABLE_NAME> MODIFY ' || i.column_name || ' NULL';
  END LOOP;
END;
/
In my opinion, in the time it takes to write the PL/SQL block, you could do it much more quickly with a good text editor. In pure SQL you just need 30 statements for 30 columns.
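For example, you can let the data dictionary generate those 30 statements for you and paste the output back into your SQL client; a minimal sketch, with MY_TABLE standing in for your table name:
-- generate one ALTER statement per NOT NULL column (MY_TABLE is a placeholder)
SELECT 'ALTER TABLE ' || table_name || ' MODIFY ' || column_name || ' NULL;' AS ddl
FROM   user_tab_columns
WHERE  table_name = 'MY_TABLE'
AND    nullable = 'N';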

For a single table you can issue a single alter table command to set the listed columns to allow null, which is a little more efficient than running one at a time, but you still have to list every column.
alter table ...
modify (
  col1 null,
  col2 null,
  col3 null);
If you were applying not null constraints then this would be more worthwhile, as they require a scan of the table to ensure that no nulls are present, and (I think) an exclusive table lock.
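As an aside, if you ever go the other way and add NOT NULL constraints to a big table, I believe you can skip the validation scan by declaring them NOVALIDATE, at the cost of existing rows not being checked. A rough sketch with made-up table and constraint names:
-- Sketch only: big_table, col1 and the constraint names are placeholders.
-- Inline form (existing rows are not validated):
alter table big_table modify (col1 constraint big_table_col1_nn not null novalidate);
-- Or a similar effect with a non-validated check constraint:
alter table big_table add constraint big_table_col1_ck check (col1 is not null) enable novalidate;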

You can query user_tab_cols and combine it with a cursor FOR loop and EXECUTE IMMEDIATE to modify all NOT NULL columns; the PL/SQL block would look like this:
DECLARE
  v_sql_statement VARCHAR2(2000);
BEGIN
  FOR table_recs IN (SELECT table_name, column_name
                       FROM user_tab_cols
                      WHERE nullable = 'N') LOOP
    v_sql_statement :=
      'ALTER TABLE ' || table_recs.table_name || ' MODIFY ' || table_recs.column_name || ' NULL';
    EXECUTE IMMEDIATE v_sql_statement;
  END LOOP;
END;
/
If you want to do it for all columns in the database instead of just the ones in the current schema, replace user_tab_cols with dba_tab_cols. I'd run the query from the FOR loop on its own first to ensure that the columns being fetched are indeed the ones you want to modify.
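For example, a quick sanity check before running any DDL against the whole database could be:
-- Preview which columns the loop would alter (needs access to dba_tab_cols)
SELECT owner, table_name, column_name
FROM   dba_tab_cols
WHERE  nullable = 'N'
AND    owner NOT IN ('SYS', 'SYSTEM')   -- you will want to exclude Oracle-maintained schemas
ORDER  BY owner, table_name, column_id;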

Related

Need to build a query with column names stored in another table

I have a table as shown below. This table will be generated dynamically and I have no prior idea about what values it is going to hold.
------------------------------------------
TABLE_NAME   COLUMN_NAME   CHAR_LENGTH
------------------------------------------
EMPLOYEE     COL1          100
EMPLOYEE     COL2          200
EMPLOYEE     COL3          300
EMPLOYEE     COL4          400
Based on this table, I want to build a query that gives me the columns containing data whose character length is greater than the CHAR_LENGTH value.
For example, if COL2 contains data with a character length of 500 (> 200), the query would return COL2.
I don't have any draft code to show as an attempt, as I have no idea how I would do this.
I don't think this is possible in pure SQL due to the dynamic nature of your requirement. You'll need some form of PL/SQL.
Assuming you're ok with simply outputting the desired results, here is a PL/SQL block that will get the job done:
declare
  wExists number(1);
begin
  for rec in (select * from your_dynamic_table)
  loop
    execute immediate 'select count(*)
                         from dual
                        where exists (select null
                                        from ' || rec.table_name || ' t
                                       where length(t.' || rec.column_name || ') > ' || rec.char_length || ')'
      into wExists;
    if wExists = 1 then
      dbms_output.put_line(rec.column_name);
    end if;
  end loop;
end;
/
You'll also notice the use of the EXISTS clause to optimize the query, so that it does not scan the whole table unnecessarily when possible.
Alternatively, if you want the results to be queryable, you can consider converting the code to a pipelined function.
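A rough sketch of that pipelined-function approach, reusing the your_dynamic_table placeholder from above (the type name t_col_list and the function name f_oversized_columns are made up):
-- A schema-level collection type is required for a pipelined function
create or replace type t_col_list as table of varchar2(128);
/
create or replace function f_oversized_columns return t_col_list pipelined is
  wExists number(1);
begin
  for rec in (select table_name, column_name, char_length from your_dynamic_table)
  loop
    -- same EXISTS probe as in the block above, one check per column
    execute immediate 'select count(*) from dual where exists (select null from '
      || rec.table_name || ' t where length(t.' || rec.column_name || ') > '
      || rec.char_length || ')'
      into wExists;
    if wExists = 1 then
      pipe row (rec.column_name);
    end if;
  end loop;
  return;
end;
/
-- the results are now queryable:
select column_value as column_name from table(f_oversized_columns);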
select column_name
from (
  select statement that builds the table output
) A
where char_length < length(column_name)
Will that help?
You would need a procedure to achieve this:
Here I am using the all_tab_columns view in Oracle, a built-in dictionary view with a structure much like your reference example (try select * from all_tab_columns). The main difference is that you will never find a VARCHAR2 record whose value exceeds its data length (an obvious database-level constraint). Date fields may exceed the data length, and they do show up in this procedure's output. I am searching all columns in EMPLOYEES whose size exceeds what is specified.
DECLARE
  cursor c is
    select column_name, data_length, table_name
      from all_tab_columns
     where table_name = :Table_name;
  v_index_name  all_tab_columns.column_name%type;
  v_data_length all_tab_columns.data_length%type;
  v_number      pls_integer;
  v_table_name  all_tab_columns.table_name%type;
BEGIN
  open c;
  LOOP
    FETCH c into v_index_name, v_data_length, v_table_name;
    EXIT when c%NOTFOUND;
    v_number := 0;
    execute immediate 'select count(*) from ' || :Table_name ||
                      ' where length(' || v_index_name || ') > ' || v_data_length
      into v_number;
    if v_number > 0 then
      dbms_output.put_line(v_index_name || ' has values greater than the specified length ' || v_data_length);
    end if;
  END LOOP;
  close c;
END;
/
Replace all_tab_columns and its column names with those of your own table.
DEFECTS: The table name is hardcoded. I am trying to make the code generic with EXECUTE IMMEDIATE or some other trick, and will do so soon.
EDIT: Defect fixed; the table name is now supplied through the :Table_name bind variable.

Collecting the last updates of multiple tables into a single table

I have a problem in that I want my output to be a single table (let's call it Output) with 2 columns: one for the TableName and one for the date/time of the last update (using scn_to_timestamp(max(ora_rowscn))).
I have 100 tables and I want to pull in the last update date/times for all these tables into the Output table.
So I can do this:
insert into Output(TableName)
select table_name
from all_tables;
which will put all the table names from my database into the TableName column. But I don't know how to loop through each entry, use the table name as a variable, and pass it into scn_to_timestamp(ora_rowscn).
I thought I would try something like below:
for counter in Output(TableName) LOOP
insert into Output(UpdateDate)
select scn_to_timestamp(max(ora_rowscn))
from counter;
END LOOP;
Any suggestions?
Thank you
This query is a little bit clumsy as it uses xmlgen to execute dynamic sql in a query, but it might work for you.
select x.*
  from all_tables t,
       xmltable('/ROWSET/ROW' passing
                dbms_xmlgen.getxmltype('select ''' || t.table_name ||
                                       ''' tab_name, max(ora_rowscn) as la from ' ||
                                       t.table_name)
                COLUMNS tab_name varchar2(30) PATH 'TAB_NAME',
                        max_scn  number       PATH 'LA') x
You can also use PL/SQL with EXECUTE IMMEDIATE.
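For instance, a minimal sketch of that route, assuming Output(TableName, UpdateDate) already exists with TableName populated from all_tables as in the question, and that UpdateDate is a TIMESTAMP column:
begin
  for rec in (select TableName from Output) loop
    -- note: scn_to_timestamp can raise ORA-08181 for SCNs that are too old,
    -- and may fail for empty tables where max(ora_rowscn) is null
    execute immediate
      'update Output set UpdateDate = ' ||
      '(select scn_to_timestamp(max(ora_rowscn)) from ' || rec.TableName || ')' ||
      ' where TableName = :t'
      using rec.TableName;
  end loop;
  commit;
end;
/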

How to choose tables on select from all_tables?

I have the following table name pattern; there are several tables with the same base name and a number at the end: fmj.backup_semaforo_geo_THENUMBER, for example:
select * from fmj.backup_semaforo_geo_06391442
select * from fmj.backup_semaforo_geo_06398164
...
Let's say I need to select a column from every table that matches the 'fmj.backup_semaforo_geo_%' filter. I tried this:
SELECT calle --This column is from the backup_semaforo_geo_# tables
FROM (SELECT table_name
FROM all_tables
WHERE owner = 'FMJ' AND table_name LIKE 'BACKUP_SEMAFORO_GEO_%');
But I'm just getting the table names from all_tables:
TABLE_NAME
----------
BACKUP_SEMAFORO_GEO_06391442
BACKUP_SEMAFORO_GEO_06398164
...
How can I achieve that without getting the all_tables output?
Thanks.
Presumably your current query is getting ORA-00904: "CALLE": invalid identifier, because the subquery doesn't have a column called CALLE. You can't provide a table name to a query at runtime like that, unfortunately, and have to resort to dynamic SQL.
Something like this will loop through all the tables and, for each one, get all the values of CALLE, which you can then loop through. I've used DBMS_OUTPUT to display them, assuming you're doing this in SQL*Plus or something that can deal with that; but you may want to do something else with them.
set serveroutput on
declare
  -- declare a local collection type we can use for bulk collect; use any table
  -- that has the column, or if there isn't a stable one use the actual data
  -- type, varchar2(30) or whatever is appropriate
  type t_values is table of table.calle%type;
  -- declare an instance of that type
  l_values t_values;
  -- declare a cursor to generate the dynamic SQL; where this is done is a
  -- matter of taste (can use 'open x for select ...', then fetch, etc.)
  -- If you run the query on its own you'll see the individual selects from
  -- all the tables
  cursor c1 is
    select table_name,
           'select calle from ' || owner || '.' || table_name as query
      from all_tables
     where owner = 'FMJ'
       and table_name like 'BACKUP_SEMAFORO_GEO%'
     order by table_name;
begin
  -- loop around all the dynamic queries from the cursor
  for r1 in c1 loop
    -- for each one, execute it as dynamic SQL, with a bulk collect into
    -- the collection type created above
    execute immediate r1.query bulk collect into l_values;
    -- loop around all the elements in the collection, and print each one
    for i in 1..l_values.count loop
      dbms_output.put_line(r1.table_name || ': ' || l_values(i));
    end loop;
  end loop;
end;
/
Maybe use dynamic SQL in a PL/SQL program:
for a in (SELECT table_name
            FROM all_tables
           WHERE owner = 'FMJ' AND table_name LIKE 'BACKUP_SEMAFORO_GEO_%')
LOOP
  sql_stmt := 'SELECT calle FROM ' || a.table_name;
  EXECUTE IMMEDIATE sql_stmt;
  ...
  ...
END LOOP;

How to check if a column exists before adding it to an existing table in PL/SQL?

How do I add a simple check before adding a column to a table for an Oracle DB? I've included the SQL that I'm using to add the column.
ALTER TABLE db.tablename
ADD columnname NVARCHAR2(30);
All the metadata about the columns in Oracle Database is accessible using one of the following views:
user_tab_cols  -- for all tables owned by the user
all_tab_cols   -- for all tables accessible to the user
dba_tab_cols   -- for all tables in the database
So, if you are looking for a column like ADD_TMS in the SCOTT.EMP table and want to add the column only if it does not exist, the PL/SQL code would be along these lines:
DECLARE
  v_column_exists number := 0;
BEGIN
  select count(*) into v_column_exists
    from user_tab_cols
   where upper(column_name) = 'ADD_TMS'
     and upper(table_name) = 'EMP';
     --and owner = 'SCOTT'  -- might be required if you are using the all/dba views

  if (v_column_exists = 0) then
    execute immediate 'alter table emp add (ADD_TMS date)';
  end if;
end;
/
If you are planning to run this as a script (not part of a procedure), the easiest way would be to include the alter commands in the script and review the errors at the end of the script, assuming you have no BEGIN-END block in the script.
If you have file1.sql
alter table t1 add col1 date;
alter table t1 add col2 date;
alter table t1 add col3 date;
If col2 is already present, then when the script is run the other two columns will be added to the table and the log will show an error saying "col2" already exists, so you should be OK.
Or, you can ignore the error:
declare
  column_exists exception;
  pragma exception_init (column_exists, -01430);
begin
  execute immediate 'ALTER TABLE db.tablename ADD columnname NVARCHAR2(30)';
exception
  when column_exists then null;
end;
/
Normally, I'd suggest trying the ANSI-92 standard meta tables for something like this but I see now that Oracle doesn't support it.
-- this works against most any other database
SELECT *
  FROM INFORMATION_SCHEMA.COLUMNS C
       INNER JOIN INFORMATION_SCHEMA.TABLES T
       ON T.TABLE_NAME = C.TABLE_NAME
 WHERE C.COLUMN_NAME = 'columnname'
   AND T.TABLE_NAME = 'tablename'
Instead, it looks like you need to do something like
-- Oracle-specific table/column query
SELECT *
  FROM ALL_TAB_COLUMNS
 WHERE TABLE_NAME = 'tablename'
   AND COLUMN_NAME = 'columnname'
I do apologize in that I don't have an Oracle instance to verify the above. If it does not work, please let me know and I will delete this post.
To check whether a column exists:
select column_name as found
from user_tab_cols
where table_name = '__TABLE_NAME__'
and column_name = '__COLUMN_NAME__'

How can I find columns which have non-null values?

I have many columns in an Oracle database, and some new ones have been added with values. I'd like to find out which columns have values other than 0 or NULL. So I am looking for column names for which some sort of useful value exists in at least one row.
How do I do this?
Update: This looks very close. How do I modify it to suit my needs?
select column_name, nullable, num_distinct, num_nulls
from all_tab_columns
where table_name = 'SOME_TABLE'
You can query all the columns using the dba_tab_cols view and then see if there are columns which have values other than 0 or null.
create or replace function f_has_null_rows(
  i_table_name  in dba_tab_cols.table_name%type,
  i_column_name in dba_tab_cols.column_name%type
) return number is
  v_sql   varchar2(200);
  v_count number;
begin
  v_sql := 'select count(*) from ' || i_table_name ||
           ' where ' || i_column_name || ' is not null and ' ||
           i_column_name || ' <> 0';
  execute immediate v_sql into v_count;
  return v_count;
end;
/

select table_name, column_name
  from dba_tab_cols
 where f_has_null_rows(table_name, column_name) > 0
If you have synonyms in some schemas, you might find some of the tables are repeated. You'll have to change the code to cater for that.
Also, the check "is not equal to zero" might not be valid for columns that are not numeric, and it will give errors if the columns are of DATE datatype. You'll need to add conditions for those cases; use the data_type column in dba_tab_cols and add the condition as needed.
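A rough sketch of how that could look, with a made-up function name f_has_nonempty_rows and the data type passed in as an extra parameter (only NUMBER columns get the <> 0 test; everything else is just checked for NOT NULL):
create or replace function f_has_nonempty_rows(
  i_table_name  in dba_tab_cols.table_name%type,
  i_column_name in dba_tab_cols.column_name%type,
  i_data_type   in dba_tab_cols.data_type%type
) return number is
  v_sql   varchar2(500);
  v_count number;
begin
  if i_data_type = 'NUMBER' then
    v_sql := 'select count(*) from ' || i_table_name ||
             ' where ' || i_column_name || ' is not null and ' || i_column_name || ' <> 0';
  else
    v_sql := 'select count(*) from ' || i_table_name ||
             ' where ' || i_column_name || ' is not null';
  end if;
  execute immediate v_sql into v_count;
  return v_count;
end;
/

select table_name, column_name
  from dba_tab_cols
 where f_has_nonempty_rows(table_name, column_name, data_type) > 0;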
select column_name
  from user_tab_columns
 where table_name = 'EMP' and num_nulls = 0;
This finds columns that do not have any NULL values (according to the gathered statistics), so you can act on those.
Sorry, I misread the question the first time.
From a post on Oracle's forums, assuming your stats are up to date:
SELECT t.table_name,
t.column_name
FROM user_tab_columns t
WHERE t.nullable = 'Y'
AND t.num_distinct = 0;
This will return a list of table names and columns that contain nothing but NULLs (according to the statistics). You might want to add something like:
AND t.table_name = upper('Your_table_name')
in there to limit the results to just your table.
select 'cats' as mycolumnname from T
where exists (select id from T where cats is not null)
union
select 'dogs' as mycolumnname from T
where exists (select id from T where dogs is not null)
-- ad nauseam
is how to do it in SQL. EDIT: Different flavors of SQL might let you optimize with LIMIT or TOP 'n' in the subquery. Or maybe they're even smart enough to realize that EXISTS() only needs one row and optimize silently/transparently. P.S. Add your test for zero to the subquery.
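For instance, with the zero test folded in and each branch selecting from dual so that it returns at most one row (T, cats and dogs are the hypothetical names from the query above):
select 'cats' as mycolumnname from dual
where exists (select 1 from T where cats is not null and cats <> 0)
union all
select 'dogs' as mycolumnname from dual
where exists (select 1 from T where dogs is not null and dogs <> 0);
-- ...and so on for each column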