How to drop all triggers in a Firebird 1.5 database

For debugging purposes I need to send one table of an existing Firebird 1.5 database to someone.
Instead of sending the whole database, I want to send just this one table - no triggers, no constraints. I can't copy the data to another database, because the database itself is exactly what we want to check - why this one table is giving trouble.
I am just wondering if there is a way to drop all triggers, all constraints, and all but one table (using some clever trick with the system tables or so)?

Using a GUI tool (I personally prefer IBExpert), execute the following command:
select 'DROP TRIGGER ' || rdb$trigger_name || ';' from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null)
Copy the result to the clipboard, then paste and execute it within the Script Executive window.

If you can do the backup with Firebird 2.1, there are switches in gbak and isql for this:
Some Firebird command-line tools have been supplied with new switches to suppress the automatic firing of database triggers:
gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
These switches can only be used by the database owner and SYSDBA.
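For example, a gbak backup that suppresses database triggers might look like this (file names and credentials are illustrative):
gbak -b -nodbtriggers -user SYSDBA -password masterkey mydb.fdb mydb.fbk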

You can drop all triggers by directly deleting them from the system table, like so:
delete from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null);
Note that the normal way, using DROP TRIGGER, is certainly preferable, but it can be done.
You can also drop constraints by executing DDL statements, but to enumerate constraints and drop them in a SQL script you would need the execute block functionality that Firebird 1.5 doesn't have.
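You can, however, generate the DROP statements with a query, just like the trigger trick above, and run the pasted output by hand. A sketch for foreign keys (the same idea works for the other constraint types):
select 'ALTER TABLE ' || rdb$relation_name || ' DROP CONSTRAINT ' || rdb$constraint_name || ';'
from rdb$relation_constraints
where rdb$constraint_type = 'FOREIGN KEY'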
There are similar statements to delete other database objects, but actually running these successfully may be much more difficult because of dependencies between objects. You can't drop any object as long as another object depends on it. This can become really tricky due to circular references, where two (or even more) objects depend on one another, forming a cycle, so there isn't a single one that may be dropped first.
The way around this is to break one of the dependencies. A procedure that has dependencies on other objects, for example, can be altered to have an empty body, after which it no longer depends on those other objects, so they can be dropped. Dropping foreign keys is another way of eliminating dependencies between tables.
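In isql, emptying a hypothetical procedure MY_PROC could look like this (note that ALTER PROCEDURE replaces the whole body, so keep the original source if you need it later):
SET TERM ^ ;
ALTER PROCEDURE MY_PROC
AS
BEGIN
  /* emptied to break dependencies on other objects */
END ^
SET TERM ; ^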
I don't know of any tool implementing such a partial delete of database objects; your use case is IMO far from common. You could, however, have a look at the FlameRobin source code, which has a certain amount of dependency detection in the code used to create DDL scripts or modification statements for database objects. Armed with that information you could write your own tool to do it.
If it's a one time thing it may be enough to do this manually, though. Use any Firebird management tool of your choice for that.

Related

How do you save a CREATE VIEW statement?

EDIT: This question was based on the incorrect premise that SQL views were cleared from a database when the user who created them disconnected from the server. Leaving this question in existence in case others have that assumption.
I'm trying to use views in my database, but I'm running up against an inability to save the code as a SQL Server object for repeated use.
I tried saving CREATE VIEW statements as procedures and user-defined functions, but as many have answered on Stack Overflow, CREATE PROCEDURE and CREATE FUNCTION are incompatible with CREATE VIEW because CREATE VIEW must be the only statement in its batch.
Obviously I don't want to retype my CREATE VIEW statements every time, and I'd prefer not to have to load them from text files. I must be missing something here.
You don't really "save" CREATE/ALTER statements; the create or alter statement changes the structure of the database. You can use SSMS to generate the statement again later by right-clicking on the view and choosing Script As -> Create. This inspects the structure of the database and generates the statement.
The problem with this approach is that your database now consists of both a structure definition (DDL) and its contents, the data. If you dropped and recreated the database to clear its data, you'd also lose the structure. So you always need a database hanging around for the structure, and you have to back it up to ensure you never lose the DDL.
Personally I would use Database Projects as part of Visual Studio and SQL Server Data Tools. This lets you keep each view, table, etc. as a separate file and then update the database using schema compare. The main benefit is that you can separate the definition of the database from the database itself, and also source control or back up the DDL files.
If you really want to, you could create a view in a proc like this:
CREATE PROCEDURE uspCreateView AS
EXEC('CREATE VIEW... ')
Though, you'll have to escape single quotes in your view code with ''
However, I have to agree with the other comments that this seems like a strange thing to do.
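If you do go that route anyway, a fuller sketch might look like the following, using a hypothetical view and source table (dbo.vwActiveClients, dbo.Clients); this particular body contains no single quotes, so nothing needs escaping:
CREATE PROCEDURE uspCreateView AS
BEGIN
    -- hypothetical view name and source table
    EXEC('CREATE VIEW dbo.vwActiveClients AS
          SELECT ClientID, ClientName
          FROM dbo.Clients
          WHERE IsActive = 1');
END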
Some other thoughts:
You can use sp_helptext to get the code of an existing view:
sp_helptext '<your view name here>'
Also, INFORMATION_SCHEMA.VIEWS includes a VIEW_DEFINITION column with the same code:
SELECT * FROM INFORMATION_SCHEMA.VIEWS
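To pull the definition of a single view rather than all of them:
SELECT VIEW_DEFINITION
FROM INFORMATION_SCHEMA.VIEWS
WHERE TABLE_SCHEMA = 'dbo'
  AND TABLE_NAME = '<your view name here>'
Note that VIEW_DEFINITION is nvarchar(4000), so very long view definitions come back truncated.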

SQL Create or Replace Table in Oracle

We have an Oracle database, and we have been running into problems with our build and install procedures: when we update the table schema (add or modify columns, triggers, etc.), it doesn't always get deployed to all the instances.
Right now we handle schema updates by putting notes in the install steps for the build, telling you to run alter table commands, etc. But these always assume you are going from the last build (i.e. build 3 is installed and we are going to 4). If 1 is installed, there might be alter scripts going from 1 to 2, then 2 to 3, then 3 to 4. So this is a giant pain of a manual process that we often mess up, and we miss an alter.
Is there an easy way to do a "create or replace" on a table without dropping it and losing data? Essentially we want to compare the current table to what it should be and update it. We do not want to back up the table, drop it, create it, and then restore the data.
"Essentially we want to compare the current table to what it should be and update it"
Assuming you have a good source version that you want to use to update the other instances, you can use Toad's schema compare (you need the DBA Admin module or Toad Xpert Edition) and generate the scripts needed to update a single table, a set of tables, or whatever list of objects you choose.
I would say that the scripts should still be checked and verified before running against the target instance. Some changes may be better handled in a different way (renaming a column vs. drop/create, for example), so be careful.
One more note that others will probably bring up is that this problem shows definite holes in your company's change management process (which is a much bigger topic than this question).
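One common way to close that gap is a version table in each instance, so an install script can tell exactly which alter scripts still need to run. A minimal Oracle sketch (table and column names are illustrative):
CREATE TABLE schema_version (
  version    NUMBER PRIMARY KEY,
  applied_on DATE DEFAULT SYSDATE NOT NULL
);

-- each upgrade script records itself after its alters succeed
INSERT INTO schema_version (version) VALUES (4);
COMMIT;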

Keep table downtime to a minimum by renaming old table, then filling a new version?

I have a handful or so of permanent tables that need to be re-built on a nightly basis.
In order to keep these tables "live" for as long as possible, and also to offer the possibility of having a backup of just the previous day's data, another developer vaguely suggested taking a route similar to this when the nightly build happens:
create a permanent table (a build version; e.g., tbl_build_Client)
re-name the live table (tbl_Client gets re-named to tbl_Client_old)
rename the build version to become the live version (tbl_build_Client gets re-named to tbl_Client)
To rename the tables, sp_rename would be used: http://msdn.microsoft.com/en-us/library/ms188351.aspx
Do you see any more efficient ways to go about this, or any serious pitfalls in the approach? Thanks in advance.
Update
Trying to flesh out gbn's answer and his recommendation to use synonyms: would this be a rational approach, or am I getting some part horribly wrong?
Three real tables for "Client":
1. dbo.build_Client
2. dbo.hold_Client
3. dbo.prev_Client
Because "Client" is how other procs reference the "Client" data, the default synonym is
CREATE SYNONYM Client
FOR dbo.hold_Client
Then take these steps to refresh the data while keeping uninterrupted access:
(1.a.) TRUNCATE dbo.prev_Client (it had yesterday's data)
(1.b.) INSERT INTO dbo.prev_Client the records from dbo.build_Client, as dbo.build_Client still had yesterday's data
(2.a.) TRUNCATE dbo.build_Client
(2.b.) INSERT INTO dbo.build_Client the new data from the nightly build process
(2.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.build_Client
(3.a.) TRUNCATE dbo.hold_Client
(3.b.) INSERT INTO dbo.hold_Client the records from dbo.build_Client
(3.c.) change the synonym
DROP SYNONYM Client
CREATE SYNONYM Client
FOR dbo.hold_Client
Use indirection to avoid manipulating tables directly:
Have 3 real tables: Client1, Client2, Client3, with all indexes, constraints, triggers, etc.
Use synonyms to hide the real tables, e.g. Client, ClientOld, ClientToLoad
To generate the new table, you truncate/write to "ClientToLoad"
Then you DROP and CREATE the synonyms in a transaction so that
Client -> what was ClientToLoad
ClientOld -> what was Client
ClientToLoad -> what was ClientOld
You can use SELECT base_object_name FROM sys.synonyms WHERE name = 'Client' to work out what the current indirection is
This works on all editions of SQL Server; the other way is "partition switching", which requires Enterprise Edition.
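A sketch of the swap, assuming the current mapping is Client -> Client1, ClientOld -> Client2, and ClientToLoad -> Client3:
BEGIN TRANSACTION;
    DROP SYNONYM Client;
    DROP SYNONYM ClientOld;
    DROP SYNONYM ClientToLoad;
    CREATE SYNONYM Client       FOR dbo.Client3;  -- was ClientToLoad
    CREATE SYNONYM ClientOld    FOR dbo.Client1;  -- was Client
    CREATE SYNONYM ClientToLoad FOR dbo.Client2;  -- was ClientOld
COMMIT;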
Some things to keep in mind:
Replication - if you use replication, I don't believe you'll be able to easily implement this strategy
Indexes - make sure that any indexes you have on the tables are carried over to your new/old tables as needed
Logging - I don't remember whether or not sp_rename is fully logged, so you may want to test that in case you need to be able to roll back, etc.
Those are the possible drawbacks I can think of off the top of my head. It otherwise seems to be an effective way to handle the situation.
Apart from the missing step 0 (drop tbl_Client_old if it exists), the solution seems fine, especially if you run it in an explicit transaction. There is no backup of any previous data, however.
The other solution, without renames and drops, and which I personally would prefer is to:
Copy all rows from tbl_Client to tbl_Client_old;
Truncate tbl_Client.
(Optional) Remove obsolete records from tbl_Client_old.
It's better in that you can control how much of the old data you keep in tbl_Client_old. Which solution is faster depends on how much data the tables hold and what indexes they have.
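A minimal sketch of that approach, assuming the two tables have identical columns, including a date column you can prune on (LoadDate is illustrative):
BEGIN TRANSACTION;
    INSERT INTO tbl_Client_old
    SELECT * FROM tbl_Client;

    TRUNCATE TABLE tbl_Client;
COMMIT;

-- optional: keep only the last week of history (LoadDate is hypothetical)
DELETE FROM tbl_Client_old
WHERE LoadDate < DATEADD(day, -7, GETDATE());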
If you use SQL Server 2008, you could also try horizontal partitioning: all the data is contained in one table, but new and old data live in separate partitions.

Need to alter column types in production database (SQL Server 2005)

I need help writing a TSQL script to modify two columns' data type.
We are changing two columns:
uniqueidentifier -> varchar(36) (has a primary key constraint)
xml -> nvarchar(4000)
My main concern is production deployment of the script...
The table is actively used by a public website that gets thousands of hits per hour. Consequently, we need the script to run quickly, without affecting service on the front end. Also, we need to be able to automatically roll back the transaction if an error occurs.
Fortunately, the table only contains about 25 rows, so I am guessing the update will be quick.
This database is SQL Server 2005.
(FYI - the type changes are required because of a 3rd-party tool which is not compatible with SQL Server's xml and uniqueidentifier types. We've already tested the change in dev and there are no functional issues with the change.)
As David said, executing a script against a production database without taking a backup or stopping the site is not the best idea. That said, if you want to change only one table with a small number of rows, you can prepare a script to:
Begin a transaction
Create a new table with the final structure you want
Copy the data from the original table to the new table
Rename the old table to, for example, original_name_old
Rename the new table to original_table_name
Commit the transaction
This ends with a table that has the original name but the new structure you want; in addition, you keep the original table under a backup name, so if you need to roll back the change you can simply drop the new table and rename the original one back.
If the table has foreign keys, the script will be a little more complicated, but it is still possible without much work.
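A sketch of those steps for the two type changes in this question, with hypothetical table and column names (dbo.Settings, Id, Payload):
BEGIN TRANSACTION;
    -- hypothetical table with the new column types
    CREATE TABLE dbo.Settings_new (
        Id      varchar(36)    NOT NULL PRIMARY KEY,
        Payload nvarchar(4000) NULL
    );

    INSERT INTO dbo.Settings_new (Id, Payload)
    SELECT CONVERT(varchar(36), Id), CONVERT(nvarchar(4000), Payload)
    FROM dbo.Settings;

    EXEC sp_rename 'dbo.Settings', 'Settings_old';
    EXEC sp_rename 'dbo.Settings_new', 'Settings';
COMMIT;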
Consequently, we need the script to run quickly, without affecting service on the front end.
This is just an opinion, but it's based on experience: That's a bad idea. It's better to have a short, (pre-announced if possible) scheduled downtime than to take the risk.
The only exception is if you really don't care if the data in these tables gets corrupted, and you can be down for an extended period.
In this situation, based on the types of changes you're making and the testing you've already performed, it sounds like the risk is minimal: you've tested the changes and you SHOULD be able to do it safely, but nothing is guaranteed.
First, you need to have a fall-back plan in case something goes wrong. The short version of a MINIMAL reasonable plan would include:
Shut down the website
Make a backup of the database
Run your script
Test the DB for integrity
Bring the website back online
It would be very unwise to attempt such an update while the website is live. You run the risk of being down for an extended period if something goes wrong.
A GOOD plan would also have you testing this against a copy of the database and a copy of the website (a test/staging environment) first and then taking the steps outlined above for the live server update. You have already done this. Kudos to you!
There are even better methods for making such an update, but the trade-off of down time for safety is a no-brainer in most cases.
And if you absolutely need to do this live, you might consider the following:
1) Build an offline version of the table with the new data types and copied data.
2) Build all the required keys and indexes on the offline table.
3) Swap the tables out in a transaction; you could rename the old table to something else as an emergency backup.
sp_help 'sp_rename'
But TEST all of this FIRST in a prod-like environment. Make sure your backups are up to date, and do this when you are least busy.

Can I disable identifier checking in SQL Server 2005?

I have an assortment of database objects (tables, functions, views, stored procedures), each scripted into its own file (constraints are in the same file as the table they alter), that I'd like to be able to execute in an arbitrary order. Is this possible in SQL Server 2005?
Some objects as an example:
Table A (references Table B)
Table B (references Function A)
Function A (references View A)
View A (references Table C)
Must be run in the following order:
Table C
View A
Function A
Table B
Table A
If the scripts are run out of order, errors about the missing objects are thrown.
The reason I ask is that in a project I'm working on we maintain each database object in its own file (for source control purposes), and then maintain a master script that creates each database object in the correct order. This requires the master script to be manually edited any time an object is added to the schema. I'd like to be able to just execute each script as it is found in the file system.
In my experience the most problematic issue is with views, which can reference each other recursively. I once wrote a utility that iterated through the scripts until all the errors were resolved, which only works when you're loading everything. Order was important - I think I did UDTs, tables, FKs, views (iteratively), then SPs and UDFs (iteratively, until we decided that SPs calling SPs was a bad idea, and UDFs are generally a bad idea).
If you script the foreign keys into separate files, you can get rid of table-to-table dependencies by running the FK script after creating all the tables.
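For example (hypothetical tables): the table scripts contain no REFERENCES clauses, and a single FK script runs last:
-- run after all CREATE TABLE scripts have succeeded
ALTER TABLE dbo.TableA
    ADD CONSTRAINT FK_TableA_TableB
    FOREIGN KEY (BId) REFERENCES dbo.TableB (Id);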
As far as I'm aware, functions and procedures check for object existence only in JOIN clauses.
The only difficulty I found was views depending on views, as a view definition requires that the objects the view depends on do exist.
I found this page where the author has written a nice procedure for doing exactly what you are talking about. It sounds like you just need two versions of it, one to disable the constraints and another to re-enable them.
APEX SQL Script is supposed to analyze the dependencies and order the script appropriately, but even then I've had problems.