Difference Between Drop And Drop Purge In Oracle - sql

I am using Oracle Database and I am a bit confused about the DROP and DROP ... PURGE commands. To me they both do the same thing: remove the table and its definition from the database. What is the main difference between these two?
Drop Table Tablename;
Drop Table Tablename Purge;

Normally (as of Oracle 10g), a table is moved into the recycle bin when it is dropped. However, if the PURGE modifier is specified as well, the table is dropped from the database entirely and cannot be recovered.

Oracle Database 10g introduces a new feature for dropping tables. When you drop a table, the database does not immediately release the space associated with the table. Rather, the database renames the table and places it in a recycle bin, where it can later be recovered with the FLASHBACK TABLE statement if you find that you dropped the table in error.
If you want to immediately release the space associated with the table at the time you issue the DROP TABLE statement, then include the PURGE clause as follows:
DROP TABLE employees PURGE;
Specify PURGE only if you want to drop the table and release the space associated with it in a single step. If you specify PURGE, then the database does not place the table and its dependent objects into the recycle bin.
NOTE: You cannot roll back a DROP TABLE statement with the PURGE clause, and you cannot recover the table if you drop it with the PURGE clause. This feature was not available in earlier releases.

A link to asktom article: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:32543538041420
Basically, from Oracle 10g onwards, all dropped tables go into the recycle bin, from where they can be undropped. With PURGE you skip the recycle bin and drop the table without any option to undo the action.

SQL>drop table table_name;
This command drops the table named table_name, but internally the table is moved to Oracle's recycle bin (much like deleting a file/folder with the Delete key on Windows).
Benefits:
1. The dropped table can be restored later if required.
Drawbacks:
1. It still occupies space in the database (see the query below).
SQL>drop table table_name purge;
This command removes the table table_name completely from the database (much like deleting a file/folder with Shift + Delete on Windows).
The benefits and drawbacks are the reverse of the above.
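To make the drawback concrete, here is a minimal sketch of checking how much space a dropped table still holds in the recycle bin (the table name is a placeholder; SPACE is reported in database blocks):
SELECT original_name, object_name, space
FROM   user_recyclebin
WHERE  original_name = 'TABLE_NAME';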

The statement below will drop the table and place it into the recycle bin.
DROP TABLE emp_new;
The statement below will drop the table without placing it into the recycle bin, so it cannot be recovered.
DROP TABLE emp_new PURGE;
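If a table was already dropped without PURGE, it can still be removed from the recycle bin afterwards. A minimal sketch, reusing emp_new from the example above:
PURGE TABLE emp_new;   -- removes the already-dropped table from the recycle bin and releases its space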

This is clear from @Randy's comment; just to add on:
(1) Drop Table Tablename:
Without PURGE, the dropped table is kept in the recycle bin (visible in RECYCLEBIN or USER_RECYCLEBIN) and can be restored with the FLASHBACK TABLE command. This is similar to files deleted on a Windows desktop, which move to the Recycle Bin and can be restored.
(2) Drop Table Tablename Purge:
If DROP is given along with PURGE, the table's space is released immediately and it cannot be restored (like Shift+Delete on the desktop).
The following is a practical example:
create table test_client (val_cli integer, status varchar2(10));
drop table test_client ;
select tablespace_name from all_tables where owner = 'test' and table_name = 'TEST_CLIENT';
SELECT * FROM RECYCLEBIN where ORIGINAL_NAME='TEST_CLIENT';
SELECT * FROM USER_RECYCLEBIN where ORIGINAL_NAME='TEST_CLIENT';
FLASHBACK TABLE test_client TO BEFORE DROP;
select tablespace_name from all_tables where owner = 'test' and table_name = 'TEST_CLIENT';
drop table test_client purge;
select tablespace_name from all_tables where owner = 'test' and table_name = 'TEST_CLIENT';

drop table table_name purge;
What happens with this statement:
Normally (as of Oracle 10g), a dropped table is moved into the recycle bin, where it still occupies space in the database. To release that space for other objects in a single step, we can use the PURGE clause.
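If tables were already dropped without PURGE and you just need the space back, a hedged alternative is to empty the recycle bin explicitly rather than re-dropping anything:
PURGE RECYCLEBIN;       -- frees the space held by your own dropped objects
PURGE DBA_RECYCLEBIN;   -- as a suitably privileged DBA, frees it for all users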

Related

Redshift: TRUNCATE TABLE IF EXISTS

It is recommended to use TRUNCATE TABLE instead of DELETE. However, TRUNCATE TABLE does not support the IF EXISTS clause. The alternative is to DROP TABLE and recreate it, but that requires DDL. Is there a way to TRUNCATE TABLE only if the table exists?
You have two options to achieve it:
SQL procedure/script
Using an IF condition, check whether the table exists, and only then truncate it (see the sketch after this list).
Plain SQL statements
Use CREATE TABLE IF NOT EXISTS in combination with TRUNCATE; this ensures the table always exists, so your consecutive SQL statements don't error out and stop.
CREATE TABLE IF NOT EXISTS #tobetruncated (...);
TRUNCATE TABLE #tobetruncated;
NOTE: This is not specific to Redshift; it applies to most databases unless they support special features (for example, Oracle has TABLE_EXISTS_ACTION). TRUNCATE is an all-or-nothing operation, and that's what makes it much better in performance than DELETE.
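A minimal sketch of the first option, assuming a Redshift release with stored procedure support; the procedure, schema, and table names are illustrative:
CREATE OR REPLACE PROCEDURE truncate_if_exists(schema_name VARCHAR, tbl_name VARCHAR)
AS $$
BEGIN
  -- only truncate when the table is actually present
  IF EXISTS (SELECT 1
             FROM   svv_tables
             WHERE  table_schema = schema_name
             AND    table_name   = tbl_name) THEN
    EXECUTE 'TRUNCATE TABLE ' || schema_name || '.' || tbl_name;
  END IF;
END;
$$ LANGUAGE plpgsql;

CALL truncate_if_exists('public', 'tobetruncated');
Note that TRUNCATE inside a Redshift stored procedure commits the current transaction.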

Add column to huge table in Postgresql without downtime

I have a very small table (about 1 million rows) and I'm going to drop constraints and add a new column. The query below hung for about 5 minutes, and I had to roll it back.
BEGIN WORK;
LOCK TABLE usertable IN SHARE MODE;
ALTER TABLE usertable ALTER COLUMN email DROP NOT NULL;
COMMIT WORK;
Another approach suggested in similar questions on the internet:
CREATE TABLE new_tbl
(
field1 int,
field2 int,
...
);
INSERT INTO new_tbl (field1, field2, ...)
SELECT ... FROM tbl; -- use your new logic here to insert the updated data
CREATE INDEX ...;    -- add your constraints and indexes to new_tbl
DROP TABLE tbl;
ALTER TABLE new_tbl RENAME TO tbl;
Create the new table
Insert records from the old table into the new one (takes less than a second)
Drop the old table - this query hangs for about 5 minutes; I had to roll back. It does not work for me.
Rename the newly created table to the old name
Dropping the old table simply hangs. However, when I try to drop the newly created table with 1 million rows, it works instantly. Why does dropping the old table take so much time?
SELECT blocked_locks.pid AS blocked_pid,
blocked_activity.usename AS blocked_user,
blocking_locks.pid AS blocking_pid,
blocking_activity.usename AS blocking_user,
blocked_activity.query AS blocked_statement,
blocking_activity.query AS blocking_statement
FROM pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity ON blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
ON blocking_locks.locktype = blocked_locks.locktype
AND blocking_locks.DATABASE IS NOT DISTINCT FROM blocked_locks.DATABASE
AND blocking_locks.relation IS NOT DISTINCT FROM blocked_locks.relation
AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
AND blocking_locks.virtualxid IS NOT DISTINCT FROM blocked_locks.virtualxid
AND blocking_locks.transactionid IS NOT DISTINCT FROM blocked_locks.transactionid
AND blocking_locks.classid IS NOT DISTINCT FROM blocked_locks.classid
AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
AND blocking_locks.objsubid IS NOT DISTINCT FROM blocked_locks.objsubid
AND blocking_locks.pid != blocked_locks.pid
JOIN pg_catalog.pg_stat_activity blocking_activity ON blocking_activity.pid = blocking_locks.pid
WHERE NOT blocked_locks.granted;
I can see a lot of concurrent writes/reads that are waiting for my operation. Since I took a lock on the table, I don't really think that is the reason why I can't drop the old table.
I just ran VACUUM on the old table; it did not help.
Why can't I drop the old table, and why does it take so much time compared to dropping the recently created table?
I don't have a lot of experience with PostgreSQL, but my guess is that it keeps a bit to signify when a NULLable column is NULL (as opposed to empty) and when that column is marked as NOT NULL it no longer needs that bit. So, when you change that attribute on a column the system needs to go through the whole table and rearrange the data, moving lots of bits around so that the rows are all correctly structured.
This is much different from a DROP TABLE, which merely needs to mark the disk space as no longer in use and perhaps update a few metadata values.
In short, they're very different actions, so of course they take different amounts of time.
I was not able to drop/rename the original table because of foreign keys from other tables. Once I dropped those, the approach with renaming the table worked great.
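For anyone hitting the same thing, a hedged sketch of finding the foreign keys in other tables that reference the old table before dropping it (the constraint and referencing table names below are placeholders):
-- list foreign keys in other tables that point at the old table
SELECT conname,
       conrelid::regclass AS referencing_table
FROM   pg_constraint
WHERE  contype = 'f'
AND    confrelid = 'usertable'::regclass;

-- drop each one, then the old table can be dropped
ALTER TABLE referencing_table DROP CONSTRAINT referencing_fk;
DROP TABLE usertable;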

Caching joined tables in SQL Server

My website has a search procedure that runs very slowly. One thing that slows it down is the 8-table join it has to do (it also has a WHERE clause on ~6 search parameters). I've tried to make the query faster using various methods such as adding indexes, but these have not helped.
One idea I have is to cache the result of the 8-table join. I could create a temporary table of the join and make the search procedure query this table. I could update the table every 10 minutes or so.
Using pseudo code, I would change my procedure to look like this:
IF CachedTable is NULL or CachedTable is older than 10 minutes
DROP TABLE CachedTable
CREATE TABLE CachedTable as (select * from .....)
ENDIF
Select * from CachedTable Where Name = #SearchName
AND EmailAddress = #SearchEmailAddress
Is this a working strategy? I don't really know what syntax I would need to pull this off, or what I would need to lock to stop things from breaking if two queries happen at the same time.
Also, it might take quite a long time to make a new CachedTable each time, so I thought of trying something like double buffering in computer graphics:
IF CachedTabled is NULL
CREATE TABLE CachedTable as (select * from ...)
ELSE IF CachedTable is older than 10 minutes
-- Somehow do this asynchronously, so that the next time a search comes
-- through the new table is used?
ASYNCHRONOUS (
CREATE TABLE BufferedCachedTable as (select * from ...)
DROP TABLE CachedTable
RENAME TABLE BufferedCachedTable as CachedTable
)
Select * from CachedTable Where Name = #SearchName
AND EmailAddress = #SearchEmailAddress
Does this make any sense? If so, how would I achieve it? If not, what should I do instead? I tried using indexed views, but this resulted in weird errors, so I want something like this that I can have more control over (Also, something I can potentially spin off onto a different server in the future.)
Also, what about indexes and so on for tables created like this?
This may be obvious from the question, but I don't know that much about SQL or the options I have available.
You can use multiple schemas (you should always specify schema!) and play switch-a-roo as I demonstrated in this question. Basically you need two additional schemas (one to hold a copy of the table temporarily, and one to hold the cached copy).
CREATE SCHEMA cache AUTHORIZATION dbo;
CREATE SCHEMA hold AUTHORIZATION dbo;
Now, create a mimic of the table in the cache schema:
SELECT * INTO cache.CachedTable FROM dbo.CachedTable WHERE 1 = 0;
-- then create any indexes etc.
Now when it comes time to refresh the data:
-- step 1:
TRUNCATE TABLE cache.CachedTable;
-- (if you need to maintain FKs you may need to delete)
INSERT INTO cache.CachedTable SELECT ...
-- step 2:
-- this transaction will be almost instantaneous,
-- since it is a metadata operation only:
BEGIN TRANSACTION;
ALTER SCHEMA hold TRANSFER dbo.CachedTable;
ALTER SCHEMA dbo TRANSFER cache.CachedTable;
ALTER SCHEMA cache TRANSFER hold.CachedTable;
COMMIT TRANSACTION;
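Regarding the question about indexes: an index belongs to the physical table, so it moves with the table through the schema transfers. Create the same indexes on both dbo.CachedTable and cache.CachedTable so whichever copy is live is always indexed. A hedged example, using the search columns from the question as the key:
CREATE CLUSTERED INDEX IX_CachedTable_Name
    ON cache.CachedTable (Name, EmailAddress);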

Oracle - drop table constraints without dropping tables

I'm doing some bulk migration of a large Oracle database. The first step of this involves renaming a whole load of tables as a preparation for dropping them later (but I need to keep the data in them around for now). Any foreign key constraints on them need to be dropped - they shouldn't be connected to the rest of the database at all. If I were dropping them now I could CASCADE CONSTRAINTS, but rename simply alters the constraints.
Is there a way I can drop all of the constraints that CASCADE CONSTRAINTS would drop without dropping the table itself?
You can do it with dynamic SQL and the data dictionary:
begin
for r in ( select table_name, constraint_name
from user_constraints
where constraint_type = 'R' )
loop
execute immediate 'alter table '|| r.table_name
||' drop constraint '|| r.constraint_name;
end loop;
end;
If the tables are owned by more than one user, you'll need to drive from DBA_CONSTRAINTS and include OWNER in the projection and the executed statement. If you want to touch fewer than all the tables, I'm afraid you'll need to specify the list in the WHERE clause, unless there's some pattern to their names (see the targeted sketch below).
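If you only want to drop the foreign keys that CASCADE CONSTRAINTS would have dropped for one particular table, a hedged variation is to join back to the referenced constraint; the table name here is a placeholder:
begin
for r in ( select c.table_name, c.constraint_name
from user_constraints c
join user_constraints p on c.r_constraint_name = p.constraint_name
where c.constraint_type = 'R'
and p.table_name = 'OLD_TABLE_NAME' )
loop
execute immediate 'alter table '|| r.table_name
||' drop constraint '|| r.constraint_name;
end loop;
end;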
You can disable/re-enable constraints without dropping them. Take a look at this article.
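For completeness, a minimal sketch of disabling and later re-enabling a single constraint (the table and constraint names are illustrative):
alter table child_table disable constraint child_fk;
-- ... do the bulk migration work ...
alter table child_table enable constraint child_fk;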

Multi Rows Deletion from table in SQL Server

How can I delete 1.5 million rows from SQL Server 2000, and how much time will it take to complete this task?
I don't want to delete all records from the table... I just want to delete all records which fulfil the WHERE condition.
EDITED from a comment to an answer below.
"I fire the same query, i.e. DELETE FROM table_name with a WHERE clause... Is it possible to disable indexing while the query is running, because the query has been going for the past 20 hours? Also help me out with how I can disable indexing."
If (and only if) you want to delete all of the records in a table, you can use DROP TABLE or TRUNCATE TABLE.
DELETE removes one record at a time and records an entry in the transaction log for each deleted row.
TRUNCATE TABLE is much faster because it is minimally logged: it records page deallocations in the transaction log rather than individual row deletions. It removes all rows from a table, but the table structure and its columns, constraints, indexes and so on remain. DROP TABLE would remove those as well.
Use caution if you decide to TRUNCATE. It's irreversible (unless you have a backup).
Create a second table, inserting into it all rows from the first that you don't want deleted.
Drop the first table.
Rename the second table to the first table's name.
(Or a variation on the above.)
This can often be quicker than deleting selected records from a big table.
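A hedged sketch of that approach in T-SQL; the table name is illustrative and the placeholder condition is whatever your delete's WHERE clause would have matched:
-- keep everything that should NOT be deleted
SELECT *
INTO   dbo.BigTable_keep
FROM   dbo.BigTable
WHERE  NOT (<your delete condition>);

DROP TABLE dbo.BigTable;
EXEC sp_rename 'dbo.BigTable_keep', 'BigTable';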
You may want to try deleting in batches too. I just tested this on a table I have and the delete operation went from 13 seconds to 3 seconds.
While Exists(Select * From YourTable Where YourCondition = True)
Delete Top (100000)
From YourTable
Where YourCondition = True
I don't think you can use the TOP predicate if you are running SQL2000, but it works with SQL2005 and up. If you are using SQL2000, then you can use this syntax instead:
Set RowCount 100000
While Exists(Select * From YourTable Where YourCondition = True)
Delete
From YourTable
Where YourCondition = True
Set RowCount 0 -- reset, so later statements in the session are not limited
DELETE FROM table WHERE a=b;
When deleting that many rows you may want to disable the indexes so they don't get updated on every delete. Rewriting the indexes on every deletion will significantly slow down the whole process.
You'll want to disable these indexes before beginning your deletion or else there may be table locks already in place.
--Disable Index
ALTER INDEX [IX_MyIndex] ON dbo.MyTable DISABLE
--Enable Index
ALTER INDEX [IX_MyIndex] ON dbo.MyTable REBUILD
If you wish to remove all entries in a table you can use TRUNCATE.
Does the table you are deleting from have multiple foreign keys, or cascaded deletes or triggers? All of these will impact performance.
Depending on what you want to do and the transactional integrity required, can you delete things in small batches? For example, if you are trying to delete 1.5 million records that represent a year's worth of data, can you do it one week at a time (see the sketch below)?
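A hedged sketch of deleting one week at a time so each transaction stays small; the table name, date column, and date range are illustrative:
DECLARE @from datetime, @to datetime
SET @from = '20080101'
SET @to   = DATEADD(day, 7, @from)

WHILE @from < '20090101'
BEGIN
    DELETE FROM dbo.MyTable
    WHERE  CreatedDate >= @from
    AND    CreatedDate <  @to

    SET @from = @to
    SET @to   = DATEADD(day, 7, @to)
END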
Delete from the table with a WHERE condition that matches those 1.5 million rows.
How long it takes depends.
On Oracle it is also possible to use
truncate table <table>
TRUNCATE TABLE is also available in SQL Server. It will clear the whole table, but it is quicker than DELETE FROM (on Oracle it also performs an implicit commit).
TRUNCATE will also ignore any referential integrity or triggers on the table. DELETE FROM ... WHERE will respect both. The time will depend on the indexing of your condition columns, your hardware, and any additional system load.
The DELETE SQL is exactly the same as a normal SQL delete:
delete from table where [your condition]
However, if you're worried about time then I'll assume your question is a little deeper than this. If your table has a significant number of non-clustered indexes then in some circumstances it may be faster to drop all these indexes first and rebuild them after the delete. This is unusual, but in cases where a straightforward delete is vulnerable to timeout issues it may be helpful.
CREATE TABLE new_table AS SELECT <data you want to keep> FROM old_table;
-- index new_table
-- grant on new_table
-- add constraints on new_table
-- etc. on new_table
DROP TABLE old_table;
RENAME new_table TO old_table;