Drop and "Create Table as Select" (CTaS) in Oracle - sql

Could you explain why some DBAs advise not to use CTAS followed by a table drop repeatedly, many times over? Does it impact the data dictionary in Oracle?

It impacts the undo. Also, if the database is in archivelog mode,
a lot of archive logs get created.
If the archive destination fills up, the database can hang until space is freed.

Also, in newer versions of Oracle, dropped tables sit in the recycle bin unless you do DROP TABLE ... PURGE or some such.
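For illustration, a minimal sketch of the pattern in question (the table names scratch_copy and source_table are hypothetical); the PURGE clause makes the drop bypass the recycle bin:

-- Create-table-as-select, then drop when done
CREATE TABLE scratch_copy AS
SELECT *
FROM source_table
WHERE processed_flag = 'N';

-- ... work with scratch_copy ...

DROP TABLE scratch_copy PURGE;

Each round trip of this pattern generates undo and redo for the copied data, plus recursive data dictionary changes for the create and the drop, which is why doing it in a tight loop draws DBA complaints.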

Related

Truncation of large table in SQL Server database

I would like to completely clear one table in my SQL Server database.
Unfortunately, the table is large (> 90 GB). I am going to use the TRUNCATE statement.
Is there anything I should pay attention to beforehand?
I am also wondering whether it will somehow affect the server's disk space (currently about 110 GB free).
After the whole operation, a SHRINK DATABASE will probably be necessary.
TRUNCATE TABLE is faster and uses fewer system and transaction log resources
than DELETE with no WHERE clause,
but if you need an even faster solution, you can create a new version of the table (table1), drop the old table, and rename table1 to table.
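As an illustration, a minimal T-SQL sketch of both options (dbo.big_table and the helper name are hypothetical); note that SELECT ... INTO copies only the column structure, not indexes or constraints, so those would need to be recreated:

-- Option 1: truncate in place (minimally logged, fast)
TRUNCATE TABLE dbo.big_table;

-- Option 2: build an empty copy, drop the original, rename the copy
SELECT * INTO dbo.big_table_new FROM dbo.big_table WHERE 1 = 0;
DROP TABLE dbo.big_table;
EXEC sp_rename 'dbo.big_table_new', 'big_table';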

Teradata Drop Column returns with "no more room"

I am trying to drop a varchar(100) column from a 150 GB table (4.6 billion records). All the data in this column is null. I have 30 GB more space in the database.
When I attempt to drop the column, it says "no more room in database XY". Why does such an action need so much space?
The ALTER TABLE statement needs temporary storage for the altered version before overwriting the original table. I guess the table that you are trying to alter occupies at least 1/3 of your total storage.
This could happen for a variety of reasons. It's possible that one of the AMPs in your database is full; this would cause that error even with a minor table alteration.
Try running the following SQL to check space:
SELECT VProc, CurrentPerm, MaxPerm
FROM dbc.DiskSpace
WHERE DatabaseName = 'XY';
Also, you should check which column the primary index of this very large table is on. If the table is badly skewed, you could also run into space issues when trying to alter it or when running queries against it.
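To check for skew, a sketch along these lines should work (the table name 'big_table' is hypothetical); a large gap between the biggest and smallest per-AMP CurrentPerm values suggests a poorly chosen primary index:

SELECT Vproc, CurrentPerm, PeakPerm
FROM dbc.TableSize
WHERE DatabaseName = 'XY'
  AND TableName = 'big_table'
ORDER BY CurrentPerm DESC;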
For additional suggestions, I found a decent article on the kinds of things you may want to investigate when the "no more room in database" error occurs - Teradata SQL Tutorial. Some of the suggestions include:
dropping any intermediary work or "sandbox" tables
implementing single-value or multi-value compression (see the sketch after this list)
dropping unwanted/unnecessary secondary indexes
removing data in DBC tables like the access log or DBQL tables
removing and archiving old tables that are no longer used
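For the compression suggestion, a minimal sketch of Teradata's column-level compression (all names hypothetical); COMPRESS with a value list is multi-value compression, while COMPRESS with no list compresses NULLs:

CREATE TABLE sandbox.client_copy (
    client_id INTEGER,
    status_cd CHAR(1) COMPRESS ('A', 'I'),  -- multi-value compression
    region_cd CHAR(2) COMPRESS              -- compress NULLs only
)
PRIMARY INDEX (client_id);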

Is it possible to ignore some tablespaces when doing a physical backup?

We have a shell script that performs a physical backup of our Oracle database (tar + compress of all our database files). Recently, we created a tablespace containing tables whose contents we don't need to back up.
Is it possible to ignore the data files belonging to this tablespace and still have a valid backup?
PS: we don't want to use RMAN.
I preface my remarks here with a note: this is NOT the normative pattern. Normally, we use RMAN to back up ALL the datafiles in the database. With that said...
Yes, it may be possible to restore and recover the database from a backup with a missing datafile. But the recovery will require that the tablespace be dropped when the database is restored.
For the simple case of dropping a tablespace that contains a single datafile: first restore the database files, then:
STARTUP NOMOUNT;
ALTER DATABASE MOUNT;
ALTER DATABASE DATAFILE '<complete_path_to_datafile>' OFFLINE DROP;
ALTER DATABASE OPEN;
DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS;
Then, continue with database recovery ( RECOVER DATABASE ; )
Obviously, the tablespace_name you provide in the DROP TABLESPACE command would be the one related to the datafile that is dropped.
Obviously, this wouldn't work for the SYSTEM tablespace. And I wouldn't dare try this on other tablespaces like UNDO, SYSAUX, USERS. And there's different syntax for dropping and adding TEMPORARY TABLESPACES.
I don't know of any "gotchas" with the 'DROP TABLESPACE ... INCLUDING CONTENTS', but consider that objects in other tablespaces could be impacted. (Consider that the dropped tablespace might have indexes for tables in other tablespaces, impacts on foreign key constraints, impacts on stored procedures, and so on.)
And it goes without saying, that you would need to test this type of restore procedure in a test environment before you rely on this technique in production.
Without testing, you would be much better served by using RMAN to backup ALL of the datafiles.
NOTE: I have not done anything like this since Oracle 8, possibly Oracle 7.3 (back when we had to roll our own hotbackup scripts). Since we've started using RMAN, I haven't had a need to test anything like this.
NOTE: The RECOVER DATABASE may need to be run before the ALTER DATABASE OPEN. I think you may get an exception warning about a "datafile needing more recovery", like you do when you start the database while a tablespace has been left in BEGIN BACKUP mode...
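If that turns out to be the case, the adjusted sequence would presumably look like this (untested, per the note above):

STARTUP NOMOUNT;
ALTER DATABASE MOUNT;
ALTER DATABASE DATAFILE '<complete_path_to_datafile>' OFFLINE DROP;
RECOVER DATABASE;
ALTER DATABASE OPEN;
DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS;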

In Oracle can you create a table that only exists while the database is running?

Is there a way in Oracle to create a table that only exists while the database is running and is only stored in memory? So if the database is restarted I will have to recreate the table?
Edit:
I want the data to persist across sessions. The reason being that the data is expensive to recreate but is also highly sensitive.
Using a temporary table would probably help performance compared to what happens today, but it's still not a great solution.
You can create a 100% ephemeral table that is usable for the duration of a session (typically shorter than the database's uptime) called a TEMPORARY table. The entire purpose of a table in memory is to make reading from it faster. You will have to re-populate the table for each session, as the table's contents are forgotten once the session completes (the table definition itself persists).
Not exactly, no.
Oracle has the concept of a "global temporary table". With a global temporary table, you create the table once, as with any other table. The table definition will persist permanently, as with any other table.
The contents of the table, however, will not be permanent. Depending on how you define it, the contents will persist for either the life of the session (ON COMMIT PRESERVE ROWS) or the life of the transaction (ON COMMIT DELETE ROWS).
See the documentation for all the details:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables003.htm#ADMIN11633
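For illustration, a minimal sketch of a session-scoped global temporary table (the table and column names are hypothetical):

CREATE GLOBAL TEMPORARY TABLE gtt_expensive_data (
    id      NUMBER,
    payload VARCHAR2(4000)
) ON COMMIT PRESERVE ROWS;

With ON COMMIT PRESERVE ROWS, each session sees only the rows it inserted, and those rows disappear when the session ends.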
Hope that helps.
You can use Oracle's trigger mechanism to invoke a stored procedure when the database starts up or shuts down.
That way you could have the startup trigger create the table, and the shutdown trigger drop it.
You'd probably also want the startup trigger to handle the case where the table already exists and truncate it, in case the server stopped suddenly and the shutdown trigger wasn't called.
Oracle trigger documentation
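A minimal sketch of such a startup trigger (the table name scratch_data is hypothetical); it creates the table and falls back to truncating it if it already exists (ORA-00955):

CREATE OR REPLACE TRIGGER trg_db_startup
AFTER STARTUP ON DATABASE
BEGIN
    BEGIN
        EXECUTE IMMEDIATE 'CREATE TABLE scratch_data (id NUMBER, payload VARCHAR2(4000))';
    EXCEPTION
        WHEN OTHERS THEN
            -- re-raise anything other than "name is already used" (ORA-00955)
            IF SQLCODE != -955 THEN
                RAISE;
            END IF;
            EXECUTE IMMEDIATE 'TRUNCATE TABLE scratch_data';
    END;
END;
/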
Using Oracle's global temporary tables, you can create a table whose data is private to each session and is deleted at the end of the transaction, or the end of the session.
If I understand correctly, you have some data that needs to be processed when the database is brought online and left available only as long as the database is online. The only use-case I can think of that would require this is if you're encrypting some data and you want to ensure that the unencrypted data is never written to disk.
If this is actually your use-case, I would recommend forgetting about trying to create your own solution for this and, instead, make use of Oracle's encrypted tablespaces or Transparent Data Encryption.

Keep table downtime to a minimum by renaming old table, then filling a new version?

I have a handful or so of permanent tables that need to be re-built on a nightly basis.
In order to keep these tables "live" for as long as possible, and also to offer the possibility of having a backup of just the previous day's data, another developer vaguely suggested
taking a route similar to this when the nightly build happens:
create a permanent table (a build version; e.g., tbl_build_Client)
re-name the live table (tbl_Client gets re-named to tbl_Client_old)
rename the build version to become the live version (tbl_build_Client gets re-named to tbl_Client)
To rename the tables, sp_rename would be used.
http://msdn.microsoft.com/en-us/library/ms188351.aspx
Do you see any more efficient ways to go about this,
or any serious pitfalls in the approach? Thanks in advance.
Update
Trying to flesh out gbn's answer and his recommendation to use synonyms,
would this be a rational approach, or am I getting some part horribly wrong?
Three real tables for "Client":
1. dbo.build_Client
2. dbo.hold_Client
3. dbo.prev_Client
Because "Client" is how other procs reference the "Client" data, the default synonym is
CREATE SYNONYM Client
FOR dbo.hold_Client
Then take these steps to refresh the data yet keep uninterrupted access.
(1.a.) TRUNCATE dbo.prev_Client (it had yesterday's data)
(1.b.) INSERT INTO dbo.prev_Client the records from dbo.build_Client, as dbo.build_Client still had yesterday's data
(2.a.) TRUNCATE dbo.build_Client
(2.b.) INSERT INTO dbo.build_Client the new data build from the new data build process
(2.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.build_Client
(3.a.) TRUNCATE dbo.hold_Client
(3.b.) INSERT INTO dbo.hold_Client the records from dbo.build_Client
(3.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.hold_Client
Use indirection to avoid manipulating tables directly:
Have 3 tables: Client1, Client2, Client3 with all indexes, constraints and triggers etc
Use synonyms to hide the real table eg Client, ClientOld, ClientToLoad
To generate the new table, you truncate/write to "ClientToLoad"
Then you DROP and CREATE the synonyms in a transaction so that
Client -> what was ClientToLoad
ClientOld -> what was Client
ClientToLoad -> what was ClientOld
You can use SELECT base_object_name FROM sys.synonyms WHERE name = 'Client' to work out what the current indirection is
This works on all editions of SQL Server; the other way is "partition switching", which requires Enterprise edition.
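For concreteness, a sketch of the swap, assuming the synonyms currently point Client -> dbo.Client1, ClientOld -> dbo.Client2, and ClientToLoad -> dbo.Client3:

BEGIN TRANSACTION;
    DROP SYNONYM Client;
    DROP SYNONYM ClientOld;
    DROP SYNONYM ClientToLoad;
    CREATE SYNONYM Client       FOR dbo.Client3;  -- was ClientToLoad
    CREATE SYNONYM ClientOld    FOR dbo.Client1;  -- was Client
    CREATE SYNONYM ClientToLoad FOR dbo.Client2;  -- was ClientOld
COMMIT;

Because DROP/CREATE SYNONYM are metadata-only operations, the swap is effectively instantaneous; readers see either the old table or the new one, never a half-loaded one.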
Some things to keep in mind:
Replication - if you use replication, I don't believe you'll be able to easily implement this strategy
Indexes - make sure that any indexes you have on the tables are carried over to your new/old tables as needed
Logging - I don't remember whether or not sp_rename is fully logged, so you may want to test that in case you need to be able to roll back, etc.
Those are the possible drawbacks I can think of off the top of my head. It otherwise seems to be an effective way to handle the situation.
Except for a missing step 0 (drop tbl_Client_old if it exists), the solution seems fine, especially if you run it in an explicit transaction. There is no backup of any previous data, however.
The other solution, without renames and drops, and the one I personally would prefer, is to:
Copy all rows from tbl_Client to tbl_Client_old;
Truncate tbl_Client.
(Optional) Remove obsolete records from tbl_Client_old.
It's better in that you can control how much of the old data you keep in tbl_Client_old. Which solution will be faster depends on how much data is stored in the tables and what indexes they have.
If you use SQL Server 2008, why not try horizontal partitioning? All the data is contained in one table, but new and old data are kept in separate partitions.
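For what it's worth, a rough sketch of what that could look like (all names and the boundary date are hypothetical); note that for a SWITCH the staging table must match the partitioned table's structure, live on the same filegroup, and carry a check constraint matching the target partition's range:

CREATE PARTITION FUNCTION pf_LoadDate (date)
    AS RANGE RIGHT FOR VALUES ('2012-01-01');

CREATE PARTITION SCHEME ps_LoadDate
    AS PARTITION pf_LoadDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Client_part (
    ClientID int  NOT NULL,
    LoadDate date NOT NULL
) ON ps_LoadDate (LoadDate);

CREATE TABLE dbo.Client_stage (
    ClientID int  NOT NULL,
    LoadDate date NOT NULL CHECK (LoadDate >= '2012-01-01')
) ON [PRIMARY];

-- After loading dbo.Client_stage, swap it in as partition 2 (metadata-only)
ALTER TABLE dbo.Client_stage SWITCH TO dbo.Client_part PARTITION 2;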