Oracle: Drop all tablespaces that meet a condition

I want to drop all tablespaces in my DB that have a particular pattern in their datafile names.
The query below gives me all the tablespaces whose datafile names match this pattern:
SELECT TABLESPACE_NAME FROM DBA_DATA_FILES WHERE FILE_NAME LIKE '/vol1/u06%' ;
I want to drop all the tablespaces returned by the above query, but I'm unable to figure out what the outer query should look like, because DROP TABLESPACE doesn't take a WHERE clause.
So the outer query should look like DROP TABLESPACE tablespace_name ..., where the tablespace_name comes one-by-one from the pattern-matching query above.
(I'm using Oracle)
Thanks!

Here is what you need. But let me say that I would not recommend it, because deleting tablespaces in a dynamic script like this can be dangerous.
BEGIN
  -- DISTINCT: a tablespace may have several datafiles matching the pattern
  FOR rs IN (SELECT DISTINCT TABLESPACE_NAME
               FROM DBA_DATA_FILES
              WHERE FILE_NAME LIKE '/vol1/u06%')
  LOOP
    EXECUTE IMMEDIATE 'DROP TABLESPACE ' || rs.TABLESPACE_NAME
      || ' INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS';
  END LOOP;
END;
/
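Given how destructive this is, a safer first pass is to print the generated statements and review them before executing anything. A dry-run sketch of the same loop (output goes to DBMS_OUTPUT; SET SERVEROUTPUT ON is the SQL*Plus way to see it):

```sql
SET SERVEROUTPUT ON
BEGIN
  FOR rs IN (SELECT DISTINCT TABLESPACE_NAME
               FROM DBA_DATA_FILES
              WHERE FILE_NAME LIKE '/vol1/u06%')
  LOOP
    -- Print the statement instead of executing it; run the output by hand
    -- once you have confirmed the list is what you expect.
    DBMS_OUTPUT.PUT_LINE('DROP TABLESPACE ' || rs.TABLESPACE_NAME
      || ' INCLUDING CONTENTS AND DATAFILES CASCADE CONSTRAINTS;');
  END LOOP;
END;
/
```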

Related

How to execute an ALTER command built by a SELECT statement in a loop in Oracle?

I am trying to rebuild the indexes of a schema through a script, but I am stuck at the point where I get the string ALTER INDEX OWNER.INDEX_NAME REBUILD NOLOGGING from a SELECT statement, and I am not getting how to execute the ALTER command. Please guide:
I tried to assign STR the value of the SELECT query used in the second FOR loop and then execute it, but it gave an error.
IS
  STR VARCHAR2(5000);
BEGIN
  FOR T IN (SELECT USERNAME
              FROM DBA_USERS
             WHERE USERNAME = 'REPORT')
  LOOP
    FOR CUR IN (SELECT ' ALTER INDEX '||OWNER||'.'||INDEX_NAME||' REBUILD NOLOGGING; '
                  FROM DBA_INDEXES
                 WHERE OWNER = T.USERNAME
                   AND TEMPORARY = 'N')
    LOOP
      --- EXECUTE IMMEDIATE STR ;
      INSERT INTO INDEX_REBUILD_HISTORY
        SELECT DISTINCT OWNER, TRUNC(LAST_DDL_TIME)
          FROM DBA_OBJECTS
         WHERE OBJECT_TYPE = 'INDEX'
           AND OWNER = T.USERNAME;
      COMMIT;
    END LOOP;
  END LOOP;
END;
You use dynamic SQL. And you don't need your outer loop; the filter it applies is already available in dba_indexes:
create procedure bld_idx
is
  vsql varchar2(500);
begin
  for x in (select owner,
                   index_name
              from dba_indexes
             where owner = 'REPORT'
               and temporary = 'N')
  loop
    -- note: no trailing ";" inside the string passed to EXECUTE IMMEDIATE
    vsql := 'ALTER INDEX ' || x.owner || '.' || x.index_name || ' REBUILD NOLOGGING';
    dbms_output.put_line(vsql); -- debugging only
    execute immediate vsql;
  end loop;
end;
/
Note 1: the above is off the top of my head. There may be minor syntax issues, but if so you should be able to work them out.
Note 2: rebuilding indexes is not something that needs to be done in the normal course of things. Richard Foote is probably the foremost authority on the internals of Oracle indexes, and he has this to say: https://richardfoote.wordpress.com/2007/12/11/index-internals-rebuilding-the-truth/
"it gave error" isn't helpful without the actual error you received. That said, you've made the same mistake so many others do: you shouldn't include the ";" as part of your dynamic SQL. It's not part of the statement; it's only used by your client to know when to send the code to the database.
FOR CUR IN
(
SELECT ' ALTER INDEX '||OWNER||'.'||INDEX_NAME|| ' REBUILD NOLOGGING' ddl_cmd FROM DBA_INDEXES
WHERE OWNER=T.USERNAME AND TEMPORARY='N'
)
...
EXECUTE IMMEDIATE CUR.ddl_cmd ;
(I've also given the column an alias so you can use it in your loop nicely.)
Then
INSERT INTO INDEX_REBUILD_HISTORY
  SELECT DISTINCT OWNER, TRUNC(LAST_DDL_TIME)
    FROM DBA_OBJECTS
   WHERE OBJECT_TYPE = 'INDEX'
     AND OWNER = T.USERNAME;
is not filtering on the index you just rebuilt, so it doesn't seem like it will collect entirely useful information.
That said...
Is it really worth rebuilding all your indexes offline and making them unrecoverable? Probably not. If you're doing this more than once and benefiting from it, then there's probably something in your data model that could be changed to help. Have a good read of this presentation by Richard Foote, a well-established Oracle indexing expert: https://richardfoote.files.wordpress.com/2007/12/index-internals-rebuilding-the-truth.pdf I doubt you'd come away from it believing that rebuilding all the indexes is a solution.

How to delete every table in a specific schema in postgres?

How do I delete all the tables I have in a specific schema? Only the tables in the schema should be deleted.
I already have all the table names, fetched with the code below, but how do I delete all those tables?
The following is some psycopg2 code, and below that is the SQL generated
writeCon.execute("SELECT table_name FROM information_schema.tables WHERE table_schema='mySchema'")
SELECT table_name FROM information_schema.tables WHERE table_schema='mySchema'
You can use an anonymous code block for that.
WARNING: This code is playing with DROP TABLE statements, and they are really mean if you make a mistake ;) The CASCADE option drops all depending objects as well. Use it with care!
DO $$
DECLARE
    row record;
BEGIN
    FOR row IN SELECT * FROM pg_tables WHERE schemaname = 'mySchema'
    LOOP
        EXECUTE 'DROP TABLE mySchema.' || quote_ident(row.tablename) || ' CASCADE';
    END LOOP;
END;
$$;
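One caveat with the block above: the schema name in the DROP statement is not quoted, so an unquoted mySchema will be folded to myschema by Postgres. A variant that quotes both identifiers via format() (a sketch, assuming the same schema name):

```sql
DO $$
DECLARE
    row record;
BEGIN
    FOR row IN SELECT tablename FROM pg_tables WHERE schemaname = 'mySchema'
    LOOP
        -- %I quotes each identifier, so mixed-case table and schema names survive intact
        EXECUTE format('DROP TABLE %I.%I CASCADE', 'mySchema', row.tablename);
    END LOOP;
END;
$$;
```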
In case you want to drop everything in your schema, including wrappers, sequences, etc., consider dropping the schema itself and creating it again:
DROP SCHEMA mySchema CASCADE;
CREATE SCHEMA mySchema;
For a single-line command, you can use psql and its \gexec functionality:
SELECT format('DROP TABLE %I.%I', table_schema, table_name)
FROM information_schema.tables
WHERE table_schema = 'mySchema'
\gexec
That will run the query and execute each result string as an SQL command.

How to clean up unused space after deleting rows from a table with functional indexes?

How do I clean up table space correctly?
For example, we have a table with several million rows and functional indexes, and we want to remove most of the rows.
For this we ran: delete from some_table where ....
What are the next steps?
Is this sequence correct?
1. Drop functional indexes.
2. Alter some_table shrink space.
3. Create functional indexes again.
As you probably realized (or you wouldn't be asking about function-based indexes in particular), you are not able to simply:
alter table mytable enable row movement;
alter table mytable shrink space;
alter table mytable disable row movement;
Attempting to do so will result in:
ORA-10631: SHRINK clause should not be specified for this object
(Note: this limitation also applies to bitmap join indexes.)
Clearly, you can drop the FBI first...
drop index my_fbi_index;
alter table mytable enable row movement;
alter table mytable shrink space;
alter table mytable disable row movement;
create index my_fbi_index... online;
That's not an online operation, though. Your application(s) will be affected by the missing function-based index for a short time.
If you need an online operation and you are on Oracle 12.2 or later, you can try this instead:
alter table mytable move online;
(alter table...move (no "online") is available pre-12.2, but it's not an online operation and it will drop your index segments, leaving the indexes marked "unusable" and requiring you to rebuild them. So, not really a good option pre-12.2.)
Best way
Create a new table with only the valid data, and recreate the indexes there, then drop the old table.
Create the new table with the filtered data, where table_name is the name of the table, filter_text is a condition starting with 'WHERE ...', and partitionText is a partitioning clause if you have one for the table, for example: RANGE (ENDEDAT) INTERVAL ( NUMTODSINTERVAL(1,''day'') ) ( PARTITION p_first VALUES LESS THAN ( TO_DATE(''01-01-2010'',''dd-MM-yyyy'') ) ) ENABLE ROW MOVEMENT
sqlCommand := 'create table ' || table_name ||'_TMP
tablespace &TBS_NORMAL_TABLES initrans 32 ' || partitionText ||'
nologging
AS (SELECT * FROM '||table_name|| ' ' ||filter_text||')';
EXECUTE IMMEDIATE sqlCommand;
Add attributes to the new table
Such as constraints and indexes. These can be collected from built-in views such as all_constraints and all_indexes. Moving the attributes over could also be automated; you just need to apply some rename hacks.
Swap the old and the new table
execute immediate 'ALTER TABLE &Schemaowner..'||v_table_name||' RENAME TO '||v_table_name||'_OT';
execute immediate 'ALTER TABLE &Schemaowner..'||v_table_name||'_TMP'||' RENAME TO '||v_table_name;
Drop the old table
execute immediate 'DROP TABLE '||v_table_name||'_OT';
TL;DR: information about shrinking and reorganizing tables
Here is some information, with useful links, from my investigation when I was considering archiving a huge amount of data on live production DBs.
An automated way to shrink some tables and handle errors on them
BEGIN
  FOR i IN (SELECT obj.owner,
                   obj.table_name,
                   (CASE WHEN NVL(idx.cnt, 0) < 1 THEN 'Y' ELSE 'N' END) AS shrinkable,
                   row_movement
              FROM all_tables obj,
                   (SELECT table_name, COUNT(rownum) cnt
                      FROM user_indexes
                     WHERE index_type LIKE 'FUN%'
                     GROUP BY table_name) idx
             WHERE obj.table_name = idx.table_name(+)
               AND obj.owner = &Schemaowner
               AND obj.table_name LIKE 'T_%' AND obj.table_name NOT LIKE 'TMP_%'
               AND NVL(idx.cnt, 0) < 1)
  LOOP
    BEGIN
      IF i.row_movement = 'ENABLED' THEN
        EXECUTE IMMEDIATE 'alter table '||i.table_name||' shrink space';
      ELSE
        EXECUTE IMMEDIATE 'alter table '||i.table_name||' enable row movement';
        EXECUTE IMMEDIATE 'alter table '||i.table_name||' shrink space';
        EXECUTE IMMEDIATE 'alter table '||i.table_name||' disable row movement';
      END IF;
      DBMS_OUTPUT.PUT_LINE('shrinked table: ' || i.table_name);
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('error while shrinking table: ' || i.table_name);
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
        DBMS_OUTPUT.PUT_LINE(SQLCODE);
        IF SQLCODE = -10635 THEN
          FOR p IN (SELECT partition_name, tablespace_name
                      FROM user_tab_partitions
                     WHERE table_name = 'SOME_PARTITIONED_TABLE')
          LOOP
            BEGIN
              EXECUTE IMMEDIATE 'alter table '||i.table_name||' MOVE PARTITION ' || p.partition_name
                || ' ONLINE TABLESPACE ' || p.tablespace_name || ' COMPRESS UPDATE INDEXES';
              DBMS_OUTPUT.PUT_LINE('moved partition: ' || p.partition_name);
            EXCEPTION
              WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('error while moving partition: ' || p.partition_name);
                DBMS_OUTPUT.PUT_LINE(SQLERRM);
                DBMS_OUTPUT.PUT_LINE(SQLCODE);
                CONTINUE;
            END;
          END LOOP;
        END IF;
        CONTINUE;
    END;
  END LOOP;
END;
/
USEFUL selects
Shrinkable tables
SELECT obj.owner
,obj.table_name
,(CASE WHEN NVL(idx.cnt, 0) < 1 THEN 'Y' ELSE 'N' END) as shrinkable
FROM all_tables obj,
(SELECT table_name, COUNT(rownum) cnt
FROM user_indexes
WHERE index_type LIKE 'FUN%'
GROUP BY table_name) idx
WHERE obj.table_name = idx.table_name(+)
AND NVL(idx.cnt,0) < 1
and obj.owner='YOUR_SCHEMA_OWNER'
Indexes that make the shrink command impossible to run
SELECT *
FROM all_indexes
WHERE index_type LIKE 'FUN%'
and owner='YOUR_SCHEMA_OWNER'
The shrinkable tables without any compromise
SELECT obj.owner
,obj.table_name
,(CASE WHEN NVL(idx.cnt, 0) < 1 THEN 'Y' ELSE 'N' END) as shrinkable
FROM all_tables obj,
(SELECT table_name, COUNT(rownum) cnt
FROM user_indexes
WHERE index_type LIKE 'FUN%'
GROUP BY table_name) idx
WHERE obj.table_name = idx.table_name(+)
AND NVL(idx.cnt,0) < 1
--and obj.table_name like 'T_%' and obj.table_name not like 'TMP_%'
and obj.compression != 'ENABLED'
and obj.table_name not in (SELECT table_name FROM user_tab_partitions WHERE compression = 'ENABLED')
and obj.owner='YOUR_SCHEMA_OWNER'
Shrink problems, compression of tables and/or partitions
SELECT table_name,compression, compress_for FROM user_tables WHERE compression = 'ENABLED'
SELECT table_name,partition_name, compression, compress_for FROM user_tab_partitions WHERE compression = 'ENABLED' ORDER BY 1
Examine these before/after shrink, test the hell out of it
select segment_name,bytes/1024/1024 as mb,blocks from user_segments where segment_name='TABLE_NAME'
In my case I created a table (not partitioned) with a couple million rows, then deleted 1/3 of it; here are the results:
                || BYTES     || BLOCKS ||
Before deletion || 105250816 || 12848  ||
After deletion  || 78774272  || 9616   ||
Investigation and side-effects
Possible side effects
https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:9536157800346457013
...and when you should use a reorg:
http://www.dba-oracle.com/t_table_fragmentation.htm
...enabling row movement while shrinking space can reorder the rows (this means that if you use ROWID-based jobs or selects or something like that, there could be some surprises):
http://www.dba-oracle.com/t_enable_row_movement.htm
create table newtable as
  select * from oldtable where ...;
drop table oldtable;
rename newtable to oldtable;
-- recreate indexes
This approach (create a copy of your table, drop the old table, rename the copy) has a disadvantage: during this operation, which may take some time, your application is not available.
So, your approach
Drop functional indexes.
Alter some_table shrink space.
Create functional indexes again.
is correct.
In case you have a partitioned table, see this one: https://dba.stackexchange.com/questions/162415/how-to-shrink-space-on-table-with-a-function-based-index
There is another option I haven't seen anyone propose: issue the DELETE, then do nothing. Why do we think we need to rebuild the index? Why do we think we need to resize the table? If we do nothing after the DELETE, all of the extents for the table remain allocated to the table and WILL be used by future INSERTs.
If you have a retail store and conduct a clearance sale, resulting in a lot of empty shelves, do you then rebuild the store to eliminate the 'wasted' space? Or do you simply re-use the empty shelves as new stock comes in? What is to be gained by resizing the table? At best, it will release the space back to the tablespace, not to the OS file system. While there are use cases that argue for resizing the table, it is NOT a given that resizing after a large (even massive) DELETE is necessarily justified.
So again, my answer is to do nothing after the DELETE (well, do a COMMIT, of course!)

Drop all tables in a Redshift schema - without dropping permissions

I would like to drop all tables in a Redshift schema. Even though this solution works:
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
it is NOT good for me, since it drops the schema permissions as well.
A solution like
DO $$ DECLARE
r RECORD;
BEGIN
-- if the schema you operate on is not "current", you will want to
-- replace current_schema() in query with 'schematodeletetablesfrom'
-- *and* update the generate 'DROP...' accordingly.
FOR r IN (SELECT tablename FROM pg_tables WHERE schemaname = current_schema()) LOOP
EXECUTE 'DROP TABLE IF EXISTS ' || quote_ident(r.tablename) || ' CASCADE';
END LOOP;
END $$;
as reported in this thread: How can I drop all the tables in a PostgreSQL database?
would be ideal. Unfortunately, it doesn't work on Redshift (apparently there is no support for FOR loops).
Is there any other solution to achieve it?
Run this SQL and copy+paste the results into your SQL client.
If you want to do it programmatically, you need to build a little bit of code around it.
SELECT 'DROP TABLE IF EXISTS ' || tablename || ' CASCADE;'
FROM pg_tables
WHERE schemaname = '<your_schema>'
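If you want it fully programmatic without leaving the database, Redshift stored procedures do support FOR loops and dynamic SQL (as the delete-based procedure in another answer shows). A sketch of a drop-based procedure; the procedure name is an assumption, and given what it does, test it on a throwaway schema first:

```sql
CREATE OR REPLACE PROCEDURE sp_drop_schema_tables(in_schema varchar)
AS $$
DECLARE
    t RECORD;
BEGIN
    FOR t IN SELECT tablename FROM pg_tables WHERE schemaname = in_schema
    LOOP
        -- quote_ident protects table names that need quoting
        EXECUTE 'DROP TABLE IF EXISTS ' || in_schema || '.'
             || quote_ident(t.tablename) || ' CASCADE';
    END LOOP;
END;
$$ LANGUAGE plpgsql;

-- CALL sp_drop_schema_tables('<your_schema>');
```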
I solved it with a procedure that deletes all records. Using this technique with TRUNCATE fails, but DELETE works fine for my intents and purposes.
create or replace procedure sp_truncate_dwh() as $$
DECLARE
    tables RECORD;
BEGIN
    FOR tables IN SELECT tablename
                    FROM pg_tables
                   WHERE schemaname = 'dwh'
                   ORDER BY tablename
    LOOP
        EXECUTE 'delete from dwh.' || quote_ident(tables.tablename);
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
--call sp_truncate_dwh()
--call sp_truncate_dwh()
In addition to demircioglu's answer, I had to add COMMIT after every drop statement to drop all tables in my schema.
SELECT 'DROP TABLE IF EXISTS ' || tablename || ' CASCADE; COMMIT;'
FROM pg_tables
WHERE schemaname = '<your_schema>'
Using Python and psycopg2 locally on my PC, I came up with this script to delete all tables in a schema:
import logging
import psycopg2

logger = logging.getLogger(__name__)

schema = "schema_to_be_deleted"
conn = None  # so the finally block works even if connect() fails
try:
    conn = psycopg2.connect("dbname='{}' port='{}' host='{}' user='{}' password='{}'".format("DB_NAME", "DB_PORT", "DB_HOST", "DB_USER", "DB_PWD"))
    cursor = conn.cursor()
    # pass the schema name as a bind parameter instead of string interpolation
    cursor.execute("SELECT tablename FROM pg_tables WHERE schemaname = %s", (schema,))
    rows = cursor.fetchall()
    for row in rows:
        cursor.execute("DROP TABLE {}.{}".format(schema, row[0]))
    cursor.close()
    conn.commit()
except psycopg2.DatabaseError as error:
    logger.error(error)
finally:
    if conn is not None:
        conn.close()
Replace correctly values for DB_NAME, DB_PORT, DB_HOST, DB_USER and DB_PWD to connect to the Redshift DB
The following recipe differs from the other answers in that it generates one SQL statement for all the tables we're going to delete.
SELECT
'DROP TABLE ' ||
LISTAGG("table", ', ') ||
';'
FROM
svv_table_info
WHERE
"table" LIKE 'staging_%';
Example result:
DROP TABLE staging_077815128468462e9de8ca6fec22f284, staging_abc, staging_123;
As in other answers, you will need to copy the generated SQL and execute it separately.
References
|| operator concatenates strings
LISTAGG function concatenates every table name into a string with a separator
The view svv_table_info is used because LISTAGG refused to work with pg_tables for me, complaining:
One or more of the used functions must be applied on at least one user created tables. Examples of user table only functions are LISTAGG, MEDIAN, PERCENTILE_CONT, etc
UPDATE: I just noticed that the SVV_TABLE_INFO documentation says:
The SVV_TABLE_INFO view doesn't return any information for empty tables.
...which means empty tables will not be in the list returned by this query. I usually delete transient tables to save disk space, so this does not bother me much; but in general this factor should be considered.

Clear a large number of tables with template names, fast and easily

I have about 30 tables in Oracle. The table names follow a common template, for example:
DF_D_AUTO, DF_D_PERSON
and so on. So the first part of a table name is always
DF_D_
I would like to clear all these tables. Of course I can clear them manually, one by one.
However, I would like to know whether there is a good, fast way, using SQL, to clear all such tables in one go.
You can run a loop over all such tables:
BEGIN
  FOR table_names IN (SELECT table_name
                        FROM dba_tables
                       WHERE table_name LIKE 'DF\_D\_%' ESCAPE '\')
  LOOP
    EXECUTE IMMEDIATE 'truncate table ' || table_names.table_name;
  END LOOP;
END;
/
DBA_TABLES - reference
Difference between dba_tables, user_tables, all_tables - reference