Database: Oracle 11g R2
Tool: TOAD for Oracle 10.6
I wanted to take a backup of a table, so I used the query below:
CREATE TABLE table_backup AS (
SELECT *
FROM table
);
I need to make some changes to the table and then restore it to its previous version after verifying the changes.
For that, I would DROP the table and restore it from its backup using the above query again.
My question is, when I do it, do all the grants, indexes, partitions, etc. remain in the restored table or not?
Also, is there a better way to achieve my requirement?
The documentation says:
Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters.
None of the grants, indexes, partitions, etc. are moved or copied when you do a CREATE TABLE ... AS SELECT statement. There is no relationship between the original and the copied table. When you drop the original table, all its grants etc. are lost. Renaming the backup table to the original name doesn't magically restore them.
Other options include:
export the original table including grants. After making your changes, drop the table and re-import it.
rename your original table, which will retain the grants etc.; then recreate the table with the original name (maybe as a select from the renamed one). When you're done, drop the new table and rename the old one back to its original name (a sketch of this flow follows after this list). But be careful - don't get carried away and drop the real table by mistake. If your verification needs any of the grants etc., you'd have to apply those to the new table; indexes would need different names, which might complicate this.
duplicate the table in a different schema (e.g. with export/import) and test your changes there, then throw it away. Again, be careful about which copy you're working on/dropping. You can duplicate related tables if necessary to maintain integrity.
drop the original table, recreate it, modify it and verify, then drop it again; and use flashback drop to restore the original table. You need to be sure your flashback (recycle bin) is set up to support this - it has to be big enough to hold both dropped tables, for example. Fast, but export/import might be safer.
revert your individual changes one by one, which is risky if you're testing the changes - easy to miss something.
You also need to consider any referential constraints (PK/FK) and whether they would be affected by a rename or drop/recreate/export/import.
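For instance, the rename route might look roughly like this (my_table is a placeholder name; double-check which object you are about to drop at each step):
ALTER TABLE my_table RENAME TO my_table_keep;   -- original keeps its grants, indexes, partitions

CREATE TABLE my_table AS                        -- disposable working copy under the original name
SELECT * FROM my_table_keep;

-- ... make and verify your changes against my_table ...

DROP TABLE my_table PURGE;                      -- throw the working copy away
ALTER TABLE my_table_keep RENAME TO my_table;   -- put the original back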
Related
I need to drop a table but I want to be 100% sure the table is unused first. How can I do so with complete certainty?
I've already:
Made sure there are no references to the table in the codebase
Dropped the table in the staging environment over a week ago
Renamed the table in production (I appended _to_delete at the end) over a week ago
Asked other engineers if the table is needed
I suppose I can revoke permissions on the table from the application database user as a next step. What I would love is to be able to record table access to know for sure that the table is not being referenced, but I wasn't able to find a way to do that over a specific timeframe.
And yes, I realize I'm being a bit paranoid (I could always restore the table from backup if it turns out it's needed), but I'm not a DBA, so I'd prefer to be extra cautious.
Create a backup of the table and then drop the table; if the application breaks, you always have the option to re-create it from the backup table.
Paranoia is a virtue for a database administrator.
Revoking permissions seems like a good way to proceed.
To check if the table is used, observe the seq_scan and idx_scan columns of the pg_stat_user_tables entry for the table. If these values don't change, the table is not accessed. These values are not 100% accurate, since statistics are deliberately sent via a UDP socket, but if the numbers don't change at all, you can be pretty certain that the table is unused.
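A minimal sketch of that check (the table name here is the hypothetical renamed one from the question; substitute your own):
SELECT seq_scan, idx_scan
FROM pg_stat_user_tables
WHERE relname = 'orders_to_delete';

-- note the values, wait a representative period, then run the query again;
-- if neither counter has moved, nothing has read the table in that window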
I want to create a temp table so I can join it to a few other tables, because joining those tables with the content of the proposed temporary table directly takes a lot of time (fetching the content of the temporary table is time consuming, and repeating it over and over takes more and more time). I am dropping the temporary table when my needs are accomplished.
I want to know if these temporary tables would be visible to other client sessions (my requirement is to make them visible only to the current client session). I am using PostgreSQL. It would be great if you could suggest better alternatives to the solution I am thinking of.
Then PostgreSQL is the database for you: it does temporary tables better than the standard requires. From the docs:
Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL standard, the effect is not the same. In the standard, temporary tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. PostgreSQL instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.
Please read the documentation.
Temporary tables are only visible in the current session and are automatically dropped when the database session ends.
If you specify ON COMMIT, the temporary table will automatically be dropped at the end of the current transaction.
If you need good table statistics on a temporary table, you have to call ANALYZE explicitly, as these statistics are not collected automatically.
By default, temporary tables are visible to the current session only and are dropped automatically at the end of the session (or at the end of the transaction, if created with ON COMMIT DROP), so you don't need to drop them explicitly.
The autovacuum daemon cannot access and therefore cannot vacuum or analyze temporary tables. For this reason, appropriate vacuum and analyze operations should be performed via session SQL commands. For example, if a temporary table is going to be used in complex queries, it is wise to run ANALYZE on the temporary table after it is populated.
Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP. This presently makes no difference in PostgreSQL and is deprecated.
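A minimal sketch pulling those points together (the orders table and tmp_order_ids are made-up names for illustration):
BEGIN;

-- visible only to this session; dropped automatically at COMMIT because of ON COMMIT DROP
CREATE TEMPORARY TABLE tmp_order_ids (order_id bigint) ON COMMIT DROP;

INSERT INTO tmp_order_ids (order_id)
SELECT id FROM orders WHERE created_at > now() - interval '1 day';

-- temp tables are not touched by autovacuum, so gather statistics by hand
ANALYZE tmp_order_ids;

SELECT o.*
FROM orders o
JOIN tmp_order_ids t ON t.order_id = o.id;

COMMIT;  -- tmp_order_ids disappears here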
I have two schemas in my database. One schema has a package that uses a table in the other schema. That table is frequently dropped and re-created with the same name after a time span.
So, the moment the table is dropped, the package in the other schema becomes invalid because of its dependency on that table.
Is there any way to keep the grant in place after the table is dropped, or any way to recompile that package automatically? Please help me with this.
Thanks in Advance.
Unfortunately there isn't any automatic task to do this; you should write code to automate it. Please check http://psoug.org/reference/ddl_trigger.html - it will help you.
Packages will always become invalid if one of their dependent objects is dropped. Recompiling the package while the table does not exist will not work; the package will still be invalid.
Consider deleting all rows from the table (or truncating it) instead of dropping and recreating it.
Otherwise, try to minimise the time between dropping and recreating the table. Make sure all relevant grants and synonyms are created at the same time as the table.
The only workaround I can think of is to create a copy of the table before you drop it. If you use a synonym to reference the table in the package, you could "trick" the package into using this copy of the table while you drop the actual one. The package would still become invalid while you change the synonym, but this would only be for a short amount of time.
The only ideal solution to this problem is to stop dropping the table.
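If dropping really can't be avoided, a rough sketch of the recreate script (the schema, table, and package names here are placeholders; adjust to your own objects):
-- recreate the table in its owning schema
CREATE TABLE schema_a.stage_data (id NUMBER, payload VARCHAR2(4000));

-- immediately restore the grant the other schema's package depends on
GRANT SELECT ON schema_a.stage_data TO schema_b;

-- recompile the dependent package (Oracle will also attempt this automatically
-- the next time the package is called, provided the table and grant exist by then)
ALTER PACKAGE schema_b.pkg_loader COMPILE;
ALTER PACKAGE schema_b.pkg_loader COMPILE BODY;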
I have a scenario where I have a central server and a node. Both server and node are capable of running PostgreSQL but the storage space on the node is limited. The node collects data at a high speed and writes the data to its local DB.
The server needs to replicate the data from the node. I plan on accomplishing this with Slony-I or Bucardo.
The node needs to be able to delete all records from its tables at a set interval in order to minimize disk space used. Should I use pgAgent with a job consisting of a script like
DELETE FROM tablex, tabley, tablez;
where the actual batch file to run the script would be something like
@echo off
C:\Progra~1\PostgreSQL\9.1\bin\psql -d database -h localhost -p 5432 -U postgres -f C:\deleteFrom.sql
?
I'm just looking for opinions if this is the best way to accomplish this task or if anyone knows of a more efficient way to pull data from a remote DB and clear that remote DB to save space on the remote node. Thanks for your time.
The most efficient command for you is the TRUNCATE command.
With TRUNCATE, you can chain up tables, like your example:
TRUNCATE tablex, tabley, tablez;
Here's the description from the postgres docs:
TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables.
You may also add CASCADE as a parameter:
CASCADE Automatically truncate all tables that have foreign-key references to any of the named tables, or to any tables added to the group due to CASCADE.
The two best options, depending on your exact needs and workflow, would be truncate, as @Bohemian suggested, or to create a new table, rename, then drop.
We use something much like the latter create/rename/drop method in one of our major projects. This has an advantage where you need to be able to delete some data, but not all data, from a table very quickly. The basic workflow is:
Create a new table with a schema identical to the old one
CREATE TABLE new_table (LIKE tablex INCLUDING ALL);
In a transaction, rename the old and new tables simultaneously:
BEGIN;
ALTER TABLE tablex RENAME TO old_table;
ALTER TABLE new_table RENAME TO tablex;
COMMIT;
[Optional] Now you can do stuff with the old table, while the new table is happily accepting new inserts. You can dump the data to your centralized server, run queries on it, or whatever.
Delete the old table
DROP TABLE old_table;
This is an especially useful strategy when you want to keep, say, 7 days of data around, and only discard the 8th day's data all at once. Doing a DELETE in this case can be very slow. By storing the data in partitions (one for each day), it is easy to drop an entire day's data at once.
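A rough sketch of that per-day layout using inheritance-style partitioning (which works on PostgreSQL 9.1; the measurements table and its columns are made up - newer releases can do the same with declarative PARTITION BY RANGE):
CREATE TABLE measurements (ts timestamptz NOT NULL, value numeric);

-- one child table per day, constrained to that day's range
CREATE TABLE measurements_2012_01_01 (
    CHECK (ts >= DATE '2012-01-01' AND ts < DATE '2012-01-02')
) INHERITS (measurements);

CREATE TABLE measurements_2012_01_08 (
    CHECK (ts >= DATE '2012-01-08' AND ts < DATE '2012-01-09')
) INHERITS (measurements);

-- the node inserts into the child table for the current day;
-- discarding the oldest day is then a cheap metadata operation, not a slow DELETE
DROP TABLE measurements_2012_01_01;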
I have a handful or so of permanent tables that need to be re-built on a nightly basis.
In order to keep these tables "live" for as long as possible, and also to offer the possibility of having a backup of just the previous day's data, another developer vaguely suggested
taking a route similar to this when the nightly build happens:
create a permanent table (a build version; e.g., tbl_build_Client)
re-name the live table (tbl_Client gets re-named to tbl_Client_old)
rename the build version to become the live version (tbl_build_Client gets re-named to tbl_Client)
To rename the tables, sp_rename would be used.
http://msdn.microsoft.com/en-us/library/ms188351.aspx
Do you see any more efficient ways to go about this,
or any serious pitfalls in the approach? Thanks in advance.
Update
Trying to flesh out gbn's answer and recommendation to use synonyms,
would this be a rational approach, or am I getting some part horribly wrong?
Three real tables for "Client":
1. dbo.build_Client
2. dbo.hold_Client
3. dbo.prev_Client
Because "Client" is how other procs reference the "Client" data, the default synonym is
CREATE SYNONYM Client
FOR dbo.hold_Client
Then take these steps to refresh the data yet keep uninterrupted access.
(1.a.) TRUNCATE dbo.prev_Client (it had yesterday's data)
(1.b.) INSERT INTO dbo.prev_Client the records from dbo.build_Client, as dbo.build_Client still had yesterday's data
(2.a.) TRUNCATE dbo.build_Client
(2.b.) INSERT INTO dbo.build_Client the new data build from the new data build process
(2.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.build_Client
(3.a.) TRUNCATE dbo.hold_Client
(3.b.) INSERT INTO dbo.hold_Client the records from dbo.build_Client
(3.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.hold_Client
Use indirection to avoid manipulating tables directly:
Have 3 tables: Client1, Client2, Client3 with all indexes, constraints and triggers etc
Use synonyms to hide the real tables, e.g. Client, ClientOld, ClientToLoad
To generate the new table, you truncate/write to "ClientToLoad"
Then you DROP and CREATE the synonyms in a transaction (a sketch follows below) so that
Client -> what was ClientToLoad
ClientOld -> what was Client
ClientToLoad -> what was ClientOld
You can use SELECT base_object_name FROM sys.synonyms WHERE name = 'Client' to work out what the current indirection is
This works on all editions of SQL Server: the other way is "partition switching" which requires enterprise edition
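A rough T-SQL sketch of one rotation (which real table each synonym points at depends on where you are in the cycle, so treat the FOR targets below as illustrative; check sys.synonyms first):
-- see where the indirection currently points
SELECT name, base_object_name
FROM sys.synonyms
WHERE name IN ('Client', 'ClientOld', 'ClientToLoad');

-- rotate all three synonyms in one transaction so readers never catch them mid-swap
BEGIN TRANSACTION;

DROP SYNONYM Client;
DROP SYNONYM ClientOld;
DROP SYNONYM ClientToLoad;

CREATE SYNONYM Client       FOR dbo.Client3;   -- the freshly loaded table
CREATE SYNONYM ClientOld    FOR dbo.Client2;   -- what Client pointed to before
CREATE SYNONYM ClientToLoad FOR dbo.Client1;   -- target for the next build

COMMIT TRANSACTION;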
Some things to keep in mind:
Replication - if you use replication, I don't believe you'll be able to easily implement this strategy
Indexes - make sure that any indexes you have on the tables are carried over to your new/old tables as needed
Logging - I don't remember whether or not sp_rename is fully logged, so you may want to test that in case you need to be able to roll back, etc.
Those are the possible drawbacks I can think of off the top of my head. It otherwise seems to be an effective way to handle the situation.
Except for the missing step 0 (drop tbl_Client_old if it exists), the solution seems fine, especially if you run it in an explicit transaction. There is no backup of any previous data, however.
The other solution, without renames and drops, which I personally would prefer, is to:
Copy all rows from tbl_Client to tbl_Client_old;
Truncate tbl_Client.
(Optional) Remove obsolete records from tbl_Client_old.
It's better in the sense that you can control how much of the old data you keep in tbl_Client_old. Which solution will be faster depends on how much data is stored in the tables and what indexes they have.
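A minimal T-SQL sketch of those three steps (the retention column LoadDate is hypothetical; adjust the predicate to however you identify obsolete rows):
BEGIN TRANSACTION;

-- 1. copy the current live rows into the history table
INSERT INTO dbo.tbl_Client_old
SELECT * FROM dbo.tbl_Client;

-- 2. empty the live table ahead of the new build
TRUNCATE TABLE dbo.tbl_Client;

COMMIT TRANSACTION;

-- 3. (optional) trim the history to the window you want to keep
DELETE FROM dbo.tbl_Client_old
WHERE LoadDate < DATEADD(DAY, -7, GETDATE());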
If you use SQL Server 2008, why not try horizontal partitioning? All the data lives in one table, but new and old data sit in separate partitions.
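A rough sketch of what that might look like (table partitioning requires Enterprise edition on SQL Server 2008; the boundary value and column names are just illustrative):
CREATE PARTITION FUNCTION pf_ClientBuild (date)
    AS RANGE RIGHT FOR VALUES ('2014-01-01');

CREATE PARTITION SCHEME ps_ClientBuild
    AS PARTITION pf_ClientBuild ALL TO ([PRIMARY]);

CREATE TABLE dbo.Client
(
    ClientId  int           NOT NULL,
    BuildDate date          NOT NULL,
    Name      nvarchar(100) NULL
) ON ps_ClientBuild (BuildDate);

-- old builds can then be switched or merged out partition-by-partition
-- (ALTER TABLE ... SWITCH PARTITION) instead of deleted row by row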