Keep table downtime to a minimum by renaming old table, then filling a new version? - sql

I have a handful or so of permanent tables that need to be re-built on a nightly basis.
In order to keep these tables "live" for as long as possible, and also to offer the possibility of having a backup of just the previous day's data, another developer vaguely suggested taking a route similar to this when the nightly build happens:
create a permanent table (a build version; e.g., tbl_build_Client)
rename the live table (tbl_Client gets renamed to tbl_Client_old)
rename the build version to become the live version (tbl_build_Client gets renamed to tbl_Client)
To rename the tables, sp_rename would be used.
http://msdn.microsoft.com/en-us/library/ms188351.aspx
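In T-SQL, the swap might look roughly like this (just a sketch; dropping any leftover tbl_Client_old first is an assumption on my part):

BEGIN TRANSACTION;
-- drop the previous night's backup if it is still around
IF OBJECT_ID('dbo.tbl_Client_old', 'U') IS NOT NULL
    DROP TABLE dbo.tbl_Client_old;
-- the live table becomes the backup
EXEC sp_rename 'dbo.tbl_Client', 'tbl_Client_old';
-- the freshly built table becomes the live table
EXEC sp_rename 'dbo.tbl_build_Client', 'tbl_Client';
COMMIT TRANSACTION;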
Do you see any more efficient ways to go about this, or any serious pitfalls in the approach? Thanks in advance.
Update
Trying to flesh out gbn's answer and recommendation to use synonyms,
would this be a rational approach, or am I getting some part horribly wrong?
Three real tables for "Client":
1. dbo.build_Client
2. dbo.hold_Client
3. dbo.prev_Client
Because "Client" is how other procs reference the "Client" data, the default synonym is
CREATE SYNONYM Client
FOR dbo.hold_Client
Then take these steps to refresh the data yet keep uninterrupted access.
(1.a.) TRUNCATE dbo.prev_Client (it had yesterday's data)
(1.b.) INSERT INTO dbo.prev_Client the records from dbo.build_Client, as dbo.build_Client still had yesterday's data
(2.a.) TRUNCATE dbo.build_Client
(2.b.) INSERT INTO dbo.build_Client the new data build from the new data build process
(2.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.build_Client
(3.a.) TRUNCATE dbo.hold_Client
(3.b.) INSERT INTO dbo.hold_Client the records from dbo.build_Client
(3.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.hold_Client

Use indirection to avoid manipulating tables directly:
Have 3 tables: Client1, Client2, Client3 with all indexes, constraints, triggers, etc.
Use synonyms to hide the real tables, e.g. Client, ClientOld, ClientToLoad
To generate the new table, you truncate/write to "ClientToLoad"
Then you DROP and CREATE the synonyms in a transaction so that
Client -> what was ClientToLoad
ClientOld -> what was Client
ClientToLoad -> what was ClientOld
You can use SELECT base_object_name FROM sys.synonyms WHERE name = 'Client' to work out what the current indirection is
This works on all editions of SQL Server; the other way is "partition switching", which requires Enterprise Edition
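A minimal sketch of that rotation, assuming Client currently points at Client1, ClientOld at Client2 and ClientToLoad at Client3 (the assignments are illustrative):

BEGIN TRANSACTION;
DROP SYNONYM dbo.Client;
DROP SYNONYM dbo.ClientOld;
DROP SYNONYM dbo.ClientToLoad;
CREATE SYNONYM dbo.Client FOR dbo.Client3;       -- was ClientToLoad
CREATE SYNONYM dbo.ClientOld FOR dbo.Client1;    -- was Client
CREATE SYNONYM dbo.ClientToLoad FOR dbo.Client2; -- was ClientOld
COMMIT TRANSACTION;

Doing the DROP and CREATE inside one transaction is what keeps callers from ever seeing a missing Client synonym.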

Some things to keep in mind:
Replication - if you use replication, I don't believe you'll be able to easily implement this strategy
Indexes - make sure that any indexes you have on the tables are carried over to your new/old tables as needed
Logging - I don't remember whether or not sp_rename is fully logged, so you may want to test that in case you need to be able to roll back, etc.
Those are the possible drawbacks I can think of off the top of my head. It otherwise seems to be an effective way to handle the situation.

Except for the missing step 0 (drop tbl_Client_old if it exists), the solution seems fine, especially if you run it in an explicit transaction. There is no backup of any previous data, however.
The other solution, without renames and drops, and which I personally would prefer, is to:
Copy all rows from tbl_Client to tbl_Client_old;
Truncate tbl_Client.
(Optional) Remove obsolete records from tbl_Client_old.
It's better in that you can control how much of the old data you store in tbl_Client_old. Which solution is faster depends on how much data is stored in the tables and what indexes they have.
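A sketch of that variant, reusing the tbl_Client / tbl_Client_old names from the question (the LoadDate column used for the optional cleanup is hypothetical):

BEGIN TRANSACTION;
-- keep the current data as the backup
INSERT INTO dbo.tbl_Client_old
SELECT * FROM dbo.tbl_Client;
-- empty the live table so the nightly build can repopulate it
TRUNCATE TABLE dbo.tbl_Client;
COMMIT TRANSACTION;
-- optional: trim the backup to a retention window
DELETE FROM dbo.tbl_Client_old
WHERE LoadDate < DATEADD(DAY, -7, GETDATE());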

If you use SQL Server 2008, why not try horizontal partitioning? All the data is contained in one table, but new and old data sit in separate partitions.
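A rough sketch of what that could look like; the BatchId column used as the partitioning key and the other columns are assumptions, not something from the question:

CREATE PARTITION FUNCTION pf_ClientBatch (int)
AS RANGE RIGHT FOR VALUES (1);
CREATE PARTITION SCHEME ps_ClientBatch
AS PARTITION pf_ClientBatch ALL TO ([PRIMARY]);
CREATE TABLE dbo.Client (
    ClientId int NOT NULL,
    ClientName nvarchar(100) NULL,
    BatchId int NOT NULL,   -- e.g. 0 = old data, 1 = new data
    CONSTRAINT PK_Client PRIMARY KEY (ClientId, BatchId)
) ON ps_ClientBatch (BatchId);

Keep gbn's Enterprise Edition caveat above in mind.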

Related

How to safely drop / delete a table?

I need to drop a table but I want to be 100% sure the table is unused first. How can I do so with complete certainty?
I've already:
Made sure there are no references to the table in the codebase
Dropped the table in the staging environment over a week ago
Renamed the table in production (I appended _to_delete at the end) over a week ago
Asked other engineers if the table is needed
I suppose I can revoke permissions on the table from the application database user as a next step. What I would love is to be able to record table access, to know for sure that the table is not being referenced, but I wasn't able to find a way to do that over a specific timeframe.
And yes, I realize I'm being a bit paranoid (I could always restore the table from backup if it turns out it's needed), but I'm not a DBA so I'd prefer to be extra cautious.
Create a backup of the table and then drop the table; if the application breaks, you always have the option to re-create it from the backup table.
Paranoia is a virtue for a database administrator.
Revoking permissions seems like a good way to proceed.
To check if the table is used, observe the seq_scan and idx_scan columns of the pg_stat_user_tables entry for the table. If these values don't change, the table is not accessed. These values are not 100% accurate, since statistics are deliberately sent via a UDP socket, but if the numbers don't change at all, you can be pretty certain that the table is unused.
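For example, using a placeholder name for the renamed table, a check might look like this; run it, wait a while, and run it again:

SELECT relname, seq_scan, idx_scan
FROM pg_stat_user_tables
WHERE relname = 'mytable_to_delete';

If both counters are unchanged between runs, nothing has read the table in that window.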

Are temporary tables in postgresql visible over all client sessions?

I want to create a temp table so as to be able to join it to a few tables, because joining those tables with the content of the proposed temporary table takes a lot of time (fetching the content of the temporary table is time-consuming; repeating it over and over takes more and more time). I am dropping the temporary table when my needs are accomplished.
I want to know if these temporary tables would be visible to other client sessions (my requirement is to make them visible only to the current client session). I am using PostgreSQL. It would be great if you could suggest better alternatives to the solution I am thinking of.
PostgreSQL, then, is the database for you; it does temporary tables better than the standard. From the docs:
Although the syntax of CREATE TEMPORARY TABLE resembles that of the SQL standard, the effect is not the same. In the standard, temporary tables are defined just once and automatically exist (starting with empty contents) in every session that needs them. PostgreSQL instead requires each session to issue its own CREATE TEMPORARY TABLE command for each temporary table to be used. This allows different sessions to use the same temporary table name for different purposes, whereas the standard's approach constrains all instances of a given temporary table name to have the same table structure.
Please read the documentation.
Temporary tables are only visible in the current session and are automatically dropped when the database session ends.
If you specify ON COMMIT, the temporary table will automatically be dropped at the end of the current transaction.
If you need good table statistics on a temporary table, you have to call ANALYZE explicitly, as these statistics are not collected automatically.
By default, temporary tables are visible to the current session only and are dropped automatically at the end of the session (or at the end of the transaction, if ON COMMIT DROP is specified), so you don't need to drop them explicitly.
The auto vacuum daemon cannot access and therefore cannot vacuum or analyze temporary tables. For this reason, appropriate vacuum and analyze operations should be performed via session SQL commands. For example, if a temporary table is going to be used in complex queries, it is wise to run ANALYZE on the temporary table after it is populated.
Optionally, GLOBAL or LOCAL can be written before TEMPORARY or TEMP. This presently makes no difference in PostgreSQL and is deprecated
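A small sketch pulling those points together (the table, column, and source names are made up):

-- visible only to this session; dropped automatically when the session ends
CREATE TEMPORARY TABLE tmp_lookup (
    id integer,
    payload text
) ON COMMIT PRESERVE ROWS;  -- default behaviour; ON COMMIT DROP would remove it at commit

INSERT INTO tmp_lookup (id, payload)
SELECT id, payload FROM expensive_source_table;

-- autovacuum never analyzes temp tables, so do it yourself before the big joins
ANALYZE tmp_lookup;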

Backup and Recover Table

Database: Oracle 11g R2
Tool: TOAD for Oracle 10.6
I wanted to take a backup of a table. Hence, I used the query below:
CREATE TABLE table_backup AS (
SELECT *
FROM table
);
I need to make some changes to the table and restore it to the previous version after verifying the changes.
For that, I would DROP the table and restore it from its backup using the above query again.
My question is, when I do it, do all the grants, indexes, partitions, etc. remain in the restored table or not?
Also, is there a better way to achieve my requirement?
The documentation says:
Dropping a table invalidates dependent objects and removes object privileges on the table. If you want to re-create the table, then you must regrant object privileges on the table, re-create the indexes, integrity constraints, and triggers for the table, and respecify its storage parameters.
None of the grants, indexes, partitions, etc. are moved or copied when you do your create table as ... select statement. There is no relationship between the original and copied tables. When you drop the original table, all its grants etc. are lost. Renaming the backup table to the original doesn't magically restore them.
Other options include:
export the original table including grants. After making your changes, drop the table and re-import it.
rename your original table, which will retain the grants etc.; then recreate the table with the original name (maybe as a select from the renamed one) - see the sketch after this list. When you're done, drop the new table and rename the old one back to its original name. But be careful - don't get carried away and drop the real table by mistake. If your verification needs any of the grants etc. then you'd have to apply those to the new table; indexes would need different names, which might complicate this.
duplicate the table in a different schema (e.g. with export/import) and test your changes there, then throw it away. Again, be careful about which copy you're working on/dropping. You can duplicate related tables if necessary to maintain integrity.
drop the original table, recreate it, modify it and verify, then drop it again; and use flashback drop to restore the original table. You need to be sure your flashback is set up to support this - it has to be big enough to hold both dropped tables, for example. Fast, but import/export might be safer.
revert your individual changes one by one, which is risky if you're testing the changes - easy to miss something.
You also need to consider any referential constraints (PK/FK) and whether they would be affected by a rename or drop/recreate/export/import.
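A sketch of the rename-based option above (my_table is a placeholder; as noted, the copy does not get the grants, indexes, or constraints):

-- keep the original, with its grants and indexes, under a new name
ALTER TABLE my_table RENAME TO my_table_keep;
-- work on a copy that carries the original name
CREATE TABLE my_table AS SELECT * FROM my_table_keep;
-- ... make and verify the changes ...
-- when finished, discard the copy and put the original back
DROP TABLE my_table;
ALTER TABLE my_table_keep RENAME TO my_table;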

In Oracle can you create a table that only exists while the database is running?

Is there a way in Oracle to create a table that only exists while the database is running and is only stored in memory? So if the database is restarted I will have to recreate the table?
Edit:
I want the data to persist across sessions. The reason being that the data is expensive to recreate but is also highly sensitive.
Using a temporary table would probably help performance compared to what happens today, but it's still not a great solution.
You can create a 100% ephemeral table that is usable for the duration of a session (typically shorter than the database's run time), called a TEMPORARY table. The entire purpose of a table in memory is to make it faster to read from. You will have to re-populate the table for each session, as the table will be forgotten (both structure and data) once the session completes.
Not exactly, no.
Oracle has the concept of a "global temporary table". With a global temporary table, you create the table once, as with any other table. The table definition will persist permanently, as with any other table.
The contents of the table, however, will not be permanent. Depending on how you define it, the contents will persist for either the life of the session (on commit preserve rows) or the life of the transaction (on commit delete rows).
See the documentation for all the details:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables003.htm#ADMIN11633
Hope that helps.
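For example (the column definitions are illustrative):

CREATE GLOBAL TEMPORARY TABLE session_scratch (
    id NUMBER,
    payload VARCHAR2(4000)
) ON COMMIT PRESERVE ROWS;  -- rows last for the session; ON COMMIT DELETE ROWS limits them to the transaction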
You can use Oracle's trigger mechanism to invoke a stored procedure when the database starts up or shuts down.
That way you could have the startup trigger create the table, and the shutdown trigger drop it.
You'd probably also want the startup trigger to handle cases where the table exists and truncate it just in case the server stopped suddenly and the shutdown trigger wasn't called.
Oracle trigger documentation
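A rough, untested sketch of that idea (the table definition and names are invented):

CREATE OR REPLACE TRIGGER trg_scratch_on_startup
AFTER STARTUP ON DATABASE
BEGIN
    BEGIN
        EXECUTE IMMEDIATE 'CREATE TABLE scratch_data (id NUMBER, payload VARCHAR2(4000))';
    EXCEPTION
        WHEN OTHERS THEN
            -- most likely the table survived a previous run; empty it instead
            EXECUTE IMMEDIATE 'TRUNCATE TABLE scratch_data';
    END;
END;
/

A matching BEFORE SHUTDOWN ON DATABASE trigger could drop the table again.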
Using Oracle's Global Temporary Tables, you can create a table in memory and have it delete the data at the end of the transaction, or the end of the session.
If I understand correctly, you have some data that needs to be processed when the database is brought online and left available only as long as the database is online. The only use-case I can think of that would require this is if you're encrypting some data and you want to ensure that the unencrypted data is never written to disk.
If this is actually your use-case, I would recommend forgetting about trying to create your own solution for this and, instead, make use of Oracle's encrypted tablespaces or Transparent Data Encryption.

Need to alter column types in production database (SQL Server 2005)

I need help writing a TSQL script to modify two columns' data type.
We are changing two columns:
uniqueidentifier -> varchar(36) (has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically rollback the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
As David said, executing a script in a production database without taking a backup or stopping the site is not the best idea. That said, if you want to make changes to only one table with a reduced number of rows, you can prepare a script to:
Begin transaction
Create a new table with the final structure you want
Copy the data from the original table to the new table
Rename the old table to, for example, original_name_old
Rename the new table to original_table_name
End transaction
This will end with a table that is named as the original one but with the new structure you want, and in addition you keep the original table under a backup name, so if you want to roll back the change you can create a script to do a simple drop of the new table and rename of the original one.
If the table has foreign keys, the script will be a little more complicated, but it is still possible without much work.
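A hedged sketch of those steps for this particular change (the table and column names are invented; with only ~25 rows the copy is trivial):

BEGIN TRANSACTION;
-- new table with the target column types
CREATE TABLE dbo.Settings_new (
    Id varchar(36) NOT NULL CONSTRAINT PK_Settings_new PRIMARY KEY,
    Payload nvarchar(4000) NULL
);
-- copy and convert the existing rows
INSERT INTO dbo.Settings_new (Id, Payload)
SELECT CONVERT(varchar(36), Id), CONVERT(nvarchar(4000), Payload)
FROM dbo.Settings;
-- swap names; the old table stays around as the fallback
EXEC sp_rename 'dbo.Settings', 'Settings_old';
EXEC sp_rename 'dbo.Settings_new', 'Settings';
COMMIT TRANSACTION;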
Consequently, we need the script to run quickly, without affecting service on the front end.
This is just an opinion, but it's based on experience: that's a bad idea. It's better to have a short (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is very minimal, since you've tested the changes and you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
Test the DB for integrity
Bring the website back online
It would be very unwise to attempt such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this live, then you might consider this:
1) Build an offline version of the table with the new datatypes and copied data.
2) Build all the required keys and indexes on the offline tables.
3) Swap the tables out in a transaction. Or you could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
But TEST all of this FIRST in a prod-like environment. And make sure your backups are up to date. AND do this when you are least busy.