SQL Create or Replace Table in Oracle

We have an Oracle database, and we have been running into problems with our build and install procedures: when we update the table schema (add or modify columns, triggers, etc.), the change doesn't always get deployed to all the instances.
Right now we handle schema updates by putting notes in the install steps for the build, saying to run ALTER TABLE commands and so on. But these always assume you are going from the last build (i.e. build 3 is installed and we are going to 4). If build 1 is installed, there might be alter scripts going from 1 to 2, then 2 to 3, then 3 to 4. So this is a giant pain of a manual process that we often mess up, and we miss an alter.
Is there an easy way to do a "create or replace" on a table without dropping it and losing data? Essentially we want to compare the current table to what it should be and update it. We do not want to back up the table, drop it, create it, and then restore it.

"Essentially we want to compare the current table to what it should be and update it"
Assuming you have a good source version that you want to use to update the other instances, you can use Toad's schema compare (you need the DBA Admin module or Toad Xpert Edition) to generate the scripts needed to update a single table, a set of tables, or whatever list of objects you choose.
I would say that the scripts should still be checked/verified before running them against the target instance. Some changes may be best handled in a different way (renaming a column vs. drop/create, for example), so be careful.
One more note that others will probably bring up is that this problem shows definite holes in your company's change management process (which is a much bigger topic than this question).

Related

Why does a temp table work but not a permanent table?

I've written a SQL query for a report that creates a permanent table and then performs a bunch of inserts and updates to get all the data, according to company policy. It runs fine in SQL Server Management Studio and in Crystal Reports 2008 on my machine. However, when I schedule it to run on the server with SAP BusinessObjects Central Management Console, it fails with the error "Associated statement not prepared."
I have found that changing this permanent table to be a temp table makes the query work. Why would this be?
Some research shows that this error is sometimes sent instead of the true error. Other people reporting it talk of foreign key and (I would also assume) duplicate key errors.
Things I would check:
Does your permanent table have any unique constraints that might be violated? Or any foreign key constraints?
Are you creating indexes on the table after it has been created?
Are you creating any views over this permanent table?
What happens if the table already exists before the job is run?
What happens to the table if the job fails?
Are there any intermediate steps (such as within a stored procedure) that might involve additional temp or permanent tables?
ETA: Also check what schema the permanent table belongs to: is it usually created with "dbo"? Are you specifying that explicitly? Is there any chance that there might be a permissions problem?
That is often a generic error. Are you able to run it on the server as the account that it is scheduled to run as? It is most likely a permission error or constraint issue.
Assuming you really need a regular table, why is it not possible to create the permanent table once, instead of creating it every time you run the query?
Recreating a regular user table each time the query runs does not seem right. But to make it work, you may try recreating the table in a separate batch or query (e.g. put GO in the script; that splits it into separate batches).
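A minimal sketch of that idea, with hypothetical table and column names (whether the tool that submits the query honours GO as a batch separator is a separate question):
-- Batch 1: drop and recreate the report table on its own
IF OBJECT_ID('dbo.ReportData', 'U') IS NOT NULL
    DROP TABLE dbo.ReportData;
CREATE TABLE dbo.ReportData (
    Id     INT            NOT NULL PRIMARY KEY,
    Amount DECIMAL(18, 2) NULL
);
GO
-- Batch 2: the inserts/updates now run against the freshly created table
INSERT INTO dbo.ReportData (Id, Amount)
SELECT OrderId, TotalAmount
FROM dbo.Orders;   -- dbo.Orders is a hypothetical source table
GO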
Regarding why it happens, I'm thinking about statement caching. The server compiles the query and stores the result for some time in case the same query has to run again. So my speculation is that it tries to run the compiled query, which refers to the table you have already dropped and recreated under the same name. The name is the same, but physically it's a new table. You could hit some bug in the server this way. This is just speculation; it could be a different kind of problem.
Without seeing the code this is a guess, but since you are creating a permanent table every time you run the report, I assume you must be dropping the table at some point? (Otherwise you'd have a LOT of tables building up over time.)
I suggest a couple of angles to consider:
1) Make certain to prefix the table names (perhaps with a session ID or something) if you are concerned about concurrency/locking issues and the like, so each report run has a table exclusive to itself.
2) If you are dropping the table at the end, instead adjust your logic to leave the table be. Write code that drops it when you (re)start the operation. It's possible the report is still clinging to the table and you are destroying it prematurely.

Keep table downtime to a minimum by renaming old table, then filling a new version?

I have a handful or so of permanent tables that need to be re-built on a nightly basis.
In order to keep these tables "live" for as long as possible, and also to offer the possibility of having a backup of just the previous day's data, another developer vaguely suggested taking a route similar to this when the nightly build happens:
create a permanent table (a build version; e.g., tbl_build_Client)
rename the live table (tbl_Client gets renamed to tbl_Client_old)
rename the build version to become the live version (tbl_build_Client gets renamed to tbl_Client)
To rename the tables, sp_rename would be used.
http://msdn.microsoft.com/en-us/library/ms188351.aspx
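A minimal sketch of that rename swap, using the table names from the steps above (wrapping it in a transaction is my assumption, not part of the original suggestion):
BEGIN TRANSACTION;
    -- move the current live table out of the way
    EXEC sp_rename 'dbo.tbl_Client', 'tbl_Client_old';
    -- promote the freshly built table to be the live one
    EXEC sp_rename 'dbo.tbl_build_Client', 'tbl_Client';
COMMIT TRANSACTION;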
Do you see any more efficient ways to go about this, or any serious pitfalls in the approach? Thanks in advance.
Update
Trying to flesh out gbn's answer and his recommendation to use synonyms, would this be a rational approach, or am I getting some part horribly wrong?
Three real tables for "Client":
1. dbo.build_Client
2. dbo.hold_Client
3. dbo.prev_Client
Because "Client" is how other procs reference the "Client" data, the default synonym is
CREATE SYNONYM Client
FOR dbo.hold_Client
Then take these steps to refresh the data yet keep uninterrupted access.
(1.a.) TRUNCATE dbo.prev_Client (it had yesterday's data)
(1.b.) INSERT INTO dbo.prev_Client the records from dbo.build_Client, as dbo.build_Client still had yesterday's data
(2.a.) TRUNCATE dbo.build_Client
(2.b.) INSERT INTO dbo.build_Client the new data build from the new data build process
(2.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.build_Client
(3.a.) TRUNCATE dbo.hold_Client
(3.b.) INSERT INTO dbo.hold_Client the records from dbo.build_Client
(3.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.hold_Client
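Steps (1) and (2) as a sketch (the source of the new build, dbo.staging_ClientBuild, is a hypothetical name standing in for whatever the build process produces):
-- (1) keep yesterday's data
TRUNCATE TABLE dbo.prev_Client;
INSERT INTO dbo.prev_Client SELECT * FROM dbo.build_Client;
-- (2) load the new build, then point the synonym at it
TRUNCATE TABLE dbo.build_Client;
INSERT INTO dbo.build_Client SELECT * FROM dbo.staging_ClientBuild;
DROP SYNONYM Client;
CREATE SYNONYM Client FOR dbo.build_Client;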
Use indirection to avoid manipulating tables directly:
Have 3 tables: Client1, Client2, Client3 with all indexes, constraints and triggers etc
Use synonyms to hide the real tables, e.g. Client, ClientOld, ClientToLoad
To generate the new table, you truncate/write to "ClientToLoad"
Then you DROP and CREATE the synonyms in a transaction so that
Client -> what was ClientToLoad
ClientOld -> what was Client
ClientToLoad -> what was ClientOld
You can use SELECT base_object_name FROM sys.synonyms WHERE name = 'Client' to work out what the current indirection is
This works on all editions of SQL Server: the other way is "partition switching" which requires enterprise edition
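A hedged sketch of one rotation, assuming Client currently points at Client1, ClientOld at Client2, and ClientToLoad at Client3 (in practice you would first look the current mapping up in sys.synonyms as described above):
BEGIN TRANSACTION;
    DROP SYNONYM Client;
    DROP SYNONYM ClientOld;
    DROP SYNONYM ClientToLoad;
    CREATE SYNONYM Client       FOR dbo.Client3;   -- was ClientToLoad
    CREATE SYNONYM ClientOld    FOR dbo.Client1;   -- was Client
    CREATE SYNONYM ClientToLoad FOR dbo.Client2;   -- was ClientOld
COMMIT TRANSACTION;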
Some things to keep in mind:
Replication - if you use replication, I don't believe you'll be able to easily implement this strategy
Indexes - make sure that any indexes you have on the tables are carried over to your new/old tables as needed
Logging - I don't remember whether or not sp_rename is fully logged, so you may want to test that in case you need to be able to roll back, etc.
Those are the possible drawbacks I can think of off the top of my head. It otherwise seems to be an effective way to handle the situation.
Apart from the missing step 0 (drop tbl_Client_old if it exists), the solution seems fine, especially if you run it in an explicit transaction. There is no backup of any previous data, however.
The other solution, without renames and drops, and which I personally would prefer is to:
Copy all rows from tbl_Client to tbl_Client_old;
Truncate tbl_Client.
(Optional) Remove obsolete records from tbl_Client_old.
It's better in the sense that you can control how much of the old data you keep in tbl_Client_old. Which solution will be faster depends on how much data is stored in the tables and what indexes the tables have.
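As a sketch, assuming tbl_Client_old already exists with the same structure, and using a hypothetical LoadDate column for the optional trimming step:
INSERT INTO tbl_Client_old SELECT * FROM tbl_Client;   -- copy the current data aside
TRUNCATE TABLE tbl_Client;                             -- empty the live table for the rebuild
-- optional: keep only the last week of history in the _old table
DELETE FROM tbl_Client_old WHERE LoadDate < DATEADD(day, -7, GETDATE());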
If you use SQL Server 2008, why not try horizontal partitioning? All the data is contained in one table, but new and old data are kept in separate partitions.

Need to alter column types in production database (SQL Server 2005)

I need help writing a TSQL script to modify two columns' data type.
We are changing two columns:
uniqueidentifier -> varchar(36) (this column has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically rollback the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
As David said, executing a script against a production database without taking a backup or stopping the site is not the best idea. That said, if you only want to change one table with a small number of rows, you can prepare a script to:
Begin transaction
Create a new table with the final structure you want
Copy the data from the original table to the new table
Rename the old table to, for example, original_name_old
Rename the new table to original_table_name
End transaction
This will end up with a table that is named like the original one but has the new structure you want, and in addition you keep the original table under a backup name, so if you want to roll back the change you can simply drop the new table and rename the original one back.
If the table has foreign keys the script will be a little more complicated, but is still possible without much work.
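A hedged sketch of that sequence for the two columns in the question (the table name dbo.Settings and the column names are hypothetical; the real column list, indexes, and permissions would need to be carried over):
BEGIN TRANSACTION;
    -- new table with the target column types
    CREATE TABLE dbo.Settings_new (
        Id      varchar(36)    NOT NULL CONSTRAINT PK_Settings_new PRIMARY KEY,
        Payload nvarchar(4000) NULL
    );
    -- copy the ~25 rows, converting the types on the way in
    INSERT INTO dbo.Settings_new (Id, Payload)
    SELECT CONVERT(varchar(36), Id), CONVERT(nvarchar(4000), Payload)
    FROM dbo.Settings;
    -- swap the tables; the old one stays around as a backup
    EXEC sp_rename 'dbo.Settings', 'Settings_old';
    EXEC sp_rename 'dbo.Settings_new', 'Settings';
COMMIT TRANSACTION;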
"Consequently, we need the script to run quickly, without affecting service on the front end."
This is just an opinion, but it's based on experience: that's a bad idea. It's better to have a short (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is very minimal, since you've tested the changes and you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
Test the DB for integrity
Bring the website back online
It would be very unwise to attempt such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this in live then you might consider this:
1) Build an offline version of the table with the new datatypes and copied data.
2) Build all the required keys and indexes on the offline tables.
3) Swap the tables out in a transaction. You could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
But TEST all of this FIRST in a prod-like environment. And make sure your backups are up to date. AND do this when you are least busy.

Cannot modify table ( using microsoft sql server management studio 2008 )

I created 2 tables and then another one with foreign keys to the other two.
I realized I want to make some changes to table no. 3.
I try to update a field but I get an error "Saving changes is not permitted. The changes you have made require the following table to be dropped and re-created."
I deleted those 2 relationships, but when I look at the dependencies I see my table still depends on those 2, and I still cannot make any changes to it.
What can I do?
You can also enable saving changes that require dropping tables by going to Tools -> Options -> Designers -> Table and Database Designers and unchecking "Prevent saving changes that require table re-creation".
Be careful with this though, sometimes it'll drop a table without being able to recreate it, which makes you lose all data that was in the table.
When using Microsoft SQL Server Management Studio 2012, the same message occurs.
I used the script feature to make the modifications, which can be seen as a rather good workaround if you want to use the designer only in a "safe" mode.
In particular, the GUI for creating a foreign key is not the best, in my opinion. Using a script (ALTER TABLE) to add an FK is faster than using this GUI feature.
Adding a 'NOT' in front of NULL in a script is not a hard change either. (Unchecking 'Allow Nulls' for a column in the designer, by contrast, triggers the "Saving changes is not permitted" message.)
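For example, a sketch of doing both kinds of change by script instead of through the designer (table and column names are hypothetical):
-- add a foreign key without the designer
ALTER TABLE dbo.OrderLine
    ADD CONSTRAINT FK_OrderLine_Order
    FOREIGN KEY (OrderId) REFERENCES dbo.Orders (OrderId);
-- disallow NULLs for an existing column (the 'NOT' before NULL)
ALTER TABLE dbo.OrderLine
    ALTER COLUMN Quantity int NOT NULL;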

How to drop all triggers in a Firebird 1.5 database

For debug purposes I need to send 1 table of an existing Firebird 1.5 database to someone.
Instead of sending the whole DB, I want to send just the DB with just this table - no triggers, no constraints. I can't copy the data to another DB, because it's exactly that data that we want to check - why this one table is giving us trouble.
I am just wondering if there is a way to drop all triggers, all constraints, and all but one table (using some clever trick with the system tables or so)?
Using a GUI tool (I personally prefer IBExpert), execute the following command:
select 'DROP TRIGGER ' || rdb$trigger_name || ';' from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null)
Copy the result to the clipboard, then paste and execute it within the Script Executive window.
If you can move your database to Firebird 2.1, there are switches in gbak and isql for this.
Some Firebird command-line tools have been supplied with new switches to suppress the automatic firing of database triggers:
gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
These switches can only be used by the database owner and SYSDBA.
You can drop all triggers by directly deleting them from the system table, like so:
delete from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null);
Note that the normal way of using drop trigger is certainly preferable, but it can be done.
You can also drop constraints by executing DDL statements, but to enumerate constraints and drop them in a SQL script you would need the execute block functionality that Firebird 1.5 doesn't have.
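As a hedged sketch (assuming the standard rdb$relation_constraints columns; the generated statements will contain harmless trailing padding, since Firebird 1.5 has no TRIM), you could generate DROP CONSTRAINT statements the same way the trigger script above is generated:
select 'ALTER TABLE ' || rdb$relation_name || ' DROP CONSTRAINT ' || rdb$constraint_name || ';'
from rdb$relation_constraints
where rdb$constraint_type in ('FOREIGN KEY', 'UNIQUE', 'CHECK');  -- foreign keys must go before the primary keys they reference
Then copy the result into the Script Executive window and run it, just like the trigger script.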
There are similar statements to delete other database objects, but actually running these successfully may be much more difficult because of dependencies between objects. You can't drop any object as long as another object depends on it. This can become really tricky due to circular references, where two (or even more) objects depend on one another, forming a cycle, so there isn't a single one that may be dropped first.
The way around this is to break one of the dependencies. A procedure for example that has dependencies to other objects can be altered to have an empty body, after which it does no longer depend on those other objects, so they may be dropped then. Dropping foreign keys is another way of eliminating dependencies between tables.
I don't know of any tool implementing such a partial delete of database objects, your use case is IMO far from common. You could however have a look at the FlameRobin source code which has a certain amount of dependency detection in the code that is used to create DDL scripts or modification statements for database objects. Armed with that information you could write your own tool to do it.
If it's a one time thing it may be enough to do this manually, though. Use any Firebird management tool of your choice for that.