In a previous job we had an extensive SQL Server database that constantly had new fields added, years after release. We stored each table's schema in a separate plain text file containing a SQL create or alter statement (I can't remember which, and that's bothering me). When a new column was needed, we would simply modify the SQL in the plain text file before compiling all the files into one master .sql script. When the script was run, it would either create the table if it didn't exist or alter the existing one to apply the changes, preventing any data loss or the need to do any importing/exporting.
My issue is that it was all done before I joined and I never got a good chance to read through the utilities and understand them. I'd like to recreate something like this for my own personal use, but I'm not quite sure how they were done. There were utilities for other things like stored procedures and views, but those would just create a stub if the object did not exist, and then all you had to do was put an ALTER in the plain text file. I am not sure how I can even begin looking this up, since it didn't seem to come up when searching for "practices", "tips", or "patterns." Does anyone know of some resources for this, or can anyone shed some insight into getting these off the ground? Thanks!
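For what it's worth, here's a rough sketch of what I think the pattern looked like - the table, column and procedure names are invented, and this is my guess at it rather than the original utility:

-- Create the table only if it doesn't exist yet.
IF OBJECT_ID(N'dbo.Customer', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.Customer
    (
        CustomerId int           NOT NULL PRIMARY KEY,
        Name       nvarchar(100) NOT NULL
    );
END;
GO

-- Columns added later are expressed as guarded ALTERs, so re-running
-- the script never touches existing data.
IF COL_LENGTH(N'dbo.Customer', N'Email') IS NULL
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
GO

-- For procedures/views the utilities apparently created a stub first,
-- so the text file only ever needed to contain an ALTER.
IF OBJECT_ID(N'dbo.GetCustomer', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.GetCustomer AS RETURN 0;');
GO
ALTER PROCEDURE dbo.GetCustomer
    @CustomerId int
AS
    SELECT CustomerId, Name, Email
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;
GO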
If you Google "continuous database integration" you should find what you're looking for.
Related
I would like to know the best practice for resolving this problem.
I have many SQL scripts for procedures, functions, etc. (every object has a stand-alone script).
The first lines of each script contain this code:
USE MyDatabase
GO
This code is necessary for direct running.
I also have an init.bat file where I call every script.
The bat file has some properties, one of which is the server name.
This lets me easily create all the objects on different servers.
But I would like to create all the objects on the same server, in another database.
I do not want to copy all the files just to change the USE database line.
Is there any way I can do this?
I thought about a database SYNONYM, but that is not possible.
Do you have any advice?
Thank you
David
I'm working on an AS400 database and I need to manipulate a library/collection with SQL.
I need to recreate something similar to the CLRLIB command, but I can't find a good way to do this.
Is there a way to delete all the tables from a library with an SQL query?
Maybe I can drop the collection and create a new one with the same name. But I don't know if this is a good way to clear the library.
RESOLVED:
Thanks to Buck Calabro for his solution.
I use the following statement to call CLRLIB from SQL:
CALL QSYS.QCMDEXC('CLRLIB LIB_NAME ASPDEV(ASP_NAME)', 0000000032.00000)
Where LIB_NAME is the name of the library I want to clear, ASP_NAME is the name of the ASP where the library is, and 0000000032.00000 is the command length.
(note that the term COLLECTION has been deprecated, SCHEMA is the current term)
Since a library can contain both SQL and non-SQL objects, there's no SQL way to delete every possible object type.
Dropping the schema and recreating it might work. But note that if the library is in a job's library list, it will have a lock on it and you will not be able to drop it. Also, unless the library was originally created via CREATE SCHEMA (or CREATE COLLECTION) you're going to end up with differences.
CRTLIB creates an empty library; CREATE SCHEMA creates a library plus the objects needed for automatic journaling and a dozen or so SQL system views.
Read Charles' answer - there may be objects in your schema that you want to keep (data areas, programs, display and printer files, etc.). If the problem is to delete all of the tables so you can rebuild them, then look at the various system catalog tables: SYSTABLES, SYSVIEWS, SYSINDEXES, etc. The system catalog 'knows' about all of the SQL tables, indexes, views, stored procedures, triggers and so on. You could read the catalog and issue the appropriate SQL DROP statements.
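For example, something roughly like this could generate the DROP statements from the catalog (MYLIB is a placeholder schema name; you still have to run the generated statements yourself, or feed them to dynamic SQL):

SELECT 'DROP TABLE ' || TRIM(TABLE_SCHEMA) || '.' || TRIM(TABLE_NAME) || ' CASCADE'
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA = 'MYLIB'
  AND TABLE_TYPE = 'T'   -- 'T' = SQL base tables; widen the filter if you also want physical files

CASCADE drops dependent views and constraints along with the table; use RESTRICT instead if you would rather the drop fail when dependencies exist.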
I have a fairly old application with an up-to-date database (on MSSQL, but that does not matter). I have scripted it completely, with all required static data. Now I want to introduce DB changes only via update scripts. Each function and each SP will be placed in a stand-alone file, and all schema update scripts will be stored in files named like 'SomeProduct01_0001', meaning the script belongs to the product SomeProduct and sprint 1, and is the first schema update script.
I know that each script must be absolutely re-runnable, but I still want the ability to combine these scripts into one based on the DB version (stored in a DB table).
What are the common best practices for handling bunches of update scripts?
Which is better: implementing version analysis in the collector (a bat or exe file), or adding a SQL header to each file? On the other hand, I already have a version - it consists of the sprint identifier and the script identifier - and I'm not sure it is OK to duplicate this information in a script header.
How do I skip a file's content if someone tries to apply it to a newer database, while keeping the ability to combine that script with any other to update an older database?
How do I avoid conflicts if a combined script operates on columns/tables that do not yet exist in the database but are created by the same script (for example, a table is created at line 10 and then used in a trigger or constraint at line 60 - as far as I know the script will not be validated)? Maybe wrap the entire script in EXEC('')? What do I need to escape besides single-quote characters?
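For instance, here is a rough sketch of how I imagine a guarded, re-runnable script wrapped in EXEC('') could look (SchemaVersion and its columns are names I just made up; as far as I know, the only character that needs escaping inside the literal is the single quote, which is doubled):

IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Sprint = 1 AND ScriptNo = 1)
BEGIN
    -- Each EXEC() gets its own batch, so objects like triggers (which must be
    -- the only statement in a batch) work, and later statements can reference
    -- objects created by earlier EXEC() calls in the same file.
    EXEC (N'
        CREATE TABLE dbo.Demo
        (
            Id   int          NOT NULL PRIMARY KEY,
            Name nvarchar(50) NOT NULL DEFAULT (''unknown'')   -- note the doubled quotes
        );
    ');

    EXEC (N'
        CREATE TRIGGER dbo.trDemo ON dbo.Demo AFTER INSERT AS
        BEGIN
            SET NOCOUNT ON;
        END;
    ');

    INSERT INTO dbo.SchemaVersion (Sprint, ScriptNo) VALUES (1, 1);
END;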
UPD: As David Tanzer answered, it is better to use ready-made solutions for DB migrations, so that may be the best option for cases like mine. It was not an exact answer to my question, but it is suitable for new projects.
You don't have to implement this yourself; there are tools that do it. Take a look at dbmaintain - it provides almost exactly the functionality you described:
http://www.dbmaintain.org/overview.html
I know of and have worked with several teams who use it to manage their database schemas in all their environments: Development, Testing, Staging and Production.
http://flywaydb.org/ seems to be another tool that does this, and it has even more features. They even have a comparison of multiple tools on their homepage (including dbmaintain).
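If you go the Flyway route, each change is just a plain SQL file whose version comes from the file name - roughly like this (the file and table names here are only an example):

-- V1__create_customer.sql   (Flyway's default pattern is V<version>__<description>.sql)
CREATE TABLE customer
(
    customer_id int          NOT NULL PRIMARY KEY,
    name        varchar(100) NOT NULL
);

Running flyway migrate then applies, in version order, any scripts it has not yet recorded in its metadata table.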
I requested that a client send me a copy of their current MS SQL database. Instead of being given a database backup, or small set of scripts I could use to recreate the database, I was provided with hundreds upon hundreds of individual SQL scripts, and no instructions on the order in which they'd need to be run.
The scripts cannot simply be executed in one batch operation, as there are foreign key dependencies between tables. It appears as though they've limited these scripts to creating a single table or stored procedure per script.
Normally, I'd simply ask a client to provide the information in a more usable format, but they're not known for getting back to us in a timely manner, and our project timeline is already in jeopardy due to delays on their end.
Are there any tools I can use to recreate the database from this enormous set of scripts?
This may sound a bit arcane, but you can do the following, iteratively:
1. Put all the scripts into a list of "scripts to be run"
2. Run all the scripts in the "to be run" list
3. Remove the successful runs
4. Repeat steps 2-3 until no scripts are left
The scripts with no dependencies will finish in the first round, the ones that depend only on them in the next round, and so on.
I would suggest that you operate all this from a metascript, that uses a database table to store the names of the available scripts.
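A very rough T-SQL sketch of that metascript, assuming the script bodies have already been loaded into a work table (all of the names here are invented):

CREATE TABLE #Scripts
(
    Id        int IDENTITY(1,1) PRIMARY KEY,
    Body      nvarchar(max) NOT NULL,
    Succeeded bit NOT NULL DEFAULT 0
);
-- ...populate #Scripts from the individual script files...

DECLARE @id int, @body nvarchar(max), @progress bit = 1;
WHILE @progress = 1
BEGIN
    SET @progress = 0;
    DECLARE scripts_cur CURSOR LOCAL FOR
        SELECT Id, Body FROM #Scripts WHERE Succeeded = 0;
    OPEN scripts_cur;
    FETCH NEXT FROM scripts_cur INTO @id, @body;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRY
            EXEC (@body);                                    -- try the script
            UPDATE #Scripts SET Succeeded = 1 WHERE Id = @id;
            SET @progress = 1;                               -- something ran, so loop again
        END TRY
        BEGIN CATCH
            -- its dependencies aren't there yet; leave it for the next pass
        END CATCH;
        FETCH NEXT FROM scripts_cur INTO @id, @body;
    END;
    CLOSE scripts_cur;
    DEALLOCATE scripts_cur;
END;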
Good luck.
If you set your folder of scripts as a data source in Red Gate SQL Compare, and specify a blank database as the target, it should allow you to compare and deploy to the target database. This is because the tool is able to read all SQL creation scripts recursively from the folder you specify. This is available as a fully functional 14-day trial, so you can easily test it in your scenario.
http://www.red-gate.com/products/sql-development/sql-compare/
The quickest (and by far the dirtiest) way of (maybe) doing this is to concatenate all of the scripts together, ensuring that you have a GO statement in between each one. Make sure there are no DROP statements in your scripts, or this technique won't work.
Run the concatenated script repeatedly for... I don't know, 10 or so iterations. Chances are you will have their database recreated properly in your test system.
If you're feeling more rigorous, go with Gordon's suggestion. I'm not really aware of a tool which will be able to reconstruct the dependencies, but you may want to take a look at Red-Gate's SQL Compare, which you can get a demo of for free, and can do some pretty magical things.
You can remove all the foreign key constraints, then organize the scripts so that all the tables are created first, then add back all the foreign keys, and finally create the indexes.
Building on Gordon.
Split the scripts up into one table each.
Count the number of FKs in each and sort, starting with the fewest.
Then remove the scripts that run successfully, as Gordon suggests.
Another potential problem is that a script creates the table, fails on the FK, and leaves the table behind.
You come back later to create the table, and since it already exists, the script fails.
If you parse out each table's FKs instead:
Start with the tables that have no FKs in a list.
Then loop through the tables with FKs.
Only add a table to the list once all of its FK targets are already in the list.
If you know .NET, this is a class with a string property for the table, a string property for the script, and a List<string> of the FK table names.
They should parse out pretty cleanly with a regex.
I would like your opinions on best practices to adopt when writing SQL scripts to install a database.
PROBLEM A)
In my script I have several batches to create tables.
The tables have many foreign keys to each other, so at the moment I must arrange the batches in the right order to avoid conflicts with the FK tables.
I would like to know whether it would be good practice to create the tables and all their columns without FKs first, and then ALTER those tables at the end of the script to add the FKs.
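For example, the kind of thing I have in mind (the table names are invented):

CREATE TABLE dbo.Customer
(
    CustomerId int           NOT NULL PRIMARY KEY,
    Name       nvarchar(100) NOT NULL
);

CREATE TABLE dbo.[Order]
(
    OrderId    int NOT NULL PRIMARY KEY,
    CustomerId int NOT NULL          -- no FK yet, so batch order doesn't matter
);

-- ...all the other CREATE TABLE batches, in any order...

-- at the end of the script, wire up the relationships:
ALTER TABLE dbo.[Order]
    ADD CONSTRAINT FK_Order_Customer
    FOREIGN KEY (CustomerId) REFERENCES dbo.Customer (CustomerId);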
PROBLEM B)
My script will be used to create the database on different servers.
The database could have a different name in every installation.
Right now my script creates the database using:
CREATE DATABASE NameX
and:
USE NameX
to use it.
Because of this I would need to update the script manually for every installation. I was thinking it would be great to have a CENTRALIZED way of naming the database inside the script.
That way, changing a single variable would create the database with my chosen name and drive all the USE statements.
I tried to use LOCAL VARIABLES, but without success, because they go out of scope after GO statements.
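For reference, this is the behaviour I mean - the variable simply no longer exists in the next batch:

DECLARE @DbName sysname = N'MyDatabase';
GO
PRINT @DbName;   -- fails: Must declare the scalar variable "@DbName".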
I do not have any experience in using sqlcmd and variables there.
Any idea how to solve this inside my script?
PS: I use MS SQL 2008 and I will load my script in SSMS.
Thanks guys for your help, this community is great :-)
avoid using "USE DATABASE"
separate the database creating script and data object creating scripts
use some code (Setup, Deploy) to execute creating database script by replacing #database_name with real name
alternative:
use some replacement tool to prepare scripts before deploy (it just replace your ###database_name### with real name)
use bat file to prepare scripts
alternative
use Database Project in the Visual Studio. VS can generate some variables that setup projects can change in the deploy process..
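Since the question mentions sqlcmd: sqlcmd scripting variables survive GO, so the name can be centralized at the top of the script. A sketch (DatabaseName is an arbitrary variable name; in SSMS the script must be run with SQLCMD Mode enabled):

:setvar DatabaseName "MyProductDb"

IF DB_ID(N'$(DatabaseName)') IS NULL
    CREATE DATABASE [$(DatabaseName)];
GO

USE [$(DatabaseName)];
GO

A bat file can instead supply the value per installation with something like sqlcmd -S ServerName -i script.sql -v DatabaseName="OtherName" (with the :setvar line removed from the script).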
One normally starts by scripting all the tables, followed by the FK scripts, index scripts and the rest. This is standard practice, as you can't add relationships to tables that aren't there...
As for your second problem: there is no way I am aware of to centralize this. Your best option is a global search/replace of the database name across the open files in SSMS.