I would like to know the best practice for resolving this problem.
I have many SQL scripts for procedures, functions, etc. (every object has a stand-alone script).
The first lines of each contain this code:
USE MyDatabase
GO
This is necessary for running a script directly.
I also have an init.bat file that calls every script.
The bat file holds some properties, one of which is the server name.
That lets me easily create all the objects on different servers.
But I would like to create all the objects on the same server, in a different database.
I do not want to copy all the files just to change the USE database line.
Is there any way to do this?
I thought about a database SYNONYM, but that is not possible.
Do you have any advice?
Thank you
David
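One approach (assuming the scripts can be run through sqlcmd, and that SQLCMD scripting variables are acceptable) is to replace the hard-coded USE line with a variable that the bat file supplies. The variable and database names below are placeholders:

```sql
-- CreateProc.sql: instead of "USE MyDatabase", reference a SQLCMD variable.
-- $(DatabaseName) is substituted by sqlcmd before the batch reaches the server.
USE [$(DatabaseName)]
GO

CREATE PROCEDURE dbo.MyProc
AS
BEGIN
    SELECT 1;
END
GO
```

The bat file then passes the name once per script, e.g. `sqlcmd -S %SERVER% -v DatabaseName=%DB% -i CreateProc.sql`. For running a script directly in SSMS you would enable SQLCMD Mode (Query menu) and add a `:setvar DatabaseName MyDatabase` line at the top as a default.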
An assignment I have as part of my PL/SQL studies requires me to create a remote database connection and copy all my local tables down to it, and then also copy my other objects that reference data: my views, triggers, etc.
The idea is that at the remote end, the views etc should reference the local tables provided the local database is online, and if it is not, then they should reference the tables stored on the remote database.
So I've created a connection, and a script that creates the tables at the remote end.
I've also written a PL/SQL block to create all the views and triggers at the remote end. It first runs a simple select against the local database to check whether it is online. If it is, a series of EXECUTE IMMEDIATE statements creates the views etc. with references to table_name@local; if it isn't, the block drops into the exception section, where a similar series of EXECUTE IMMEDIATE statements creates the same views referencing the remote tables.
OK so this is where I become unsure.
I have a package that contains a few procedures and a function, and I'm not sure what's the best way to create that at the remote end so that it behaves in a similar way in terms of where it picks up its reference tables from.
Is it simply a case of enclosing the whole package-creating block within an 'execute immediate', in the same way as I did for the views, or should I create two different packages and call them something like pack1 and pack1_remote?
Or is there as I suspect a more efficient method of achieving the goal?
cheers!
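The check-then-branch block described in the question might look roughly like this (the link name `local_link` and the objects are invented for illustration):

```sql
-- Anonymous PL/SQL block: probe the local database over the db link,
-- then create the view against whichever side is reachable.
DECLARE
    v_dummy NUMBER;
BEGIN
    -- This select throws an exception if the link is down.
    SELECT 1 INTO v_dummy FROM dual@local_link;
    EXECUTE IMMEDIATE
        'CREATE OR REPLACE VIEW emp_v AS SELECT * FROM employees@local_link';
EXCEPTION
    WHEN OTHERS THEN
        EXECUTE IMMEDIATE
            'CREATE OR REPLACE VIEW emp_v AS SELECT * FROM employees';
END;
/
```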
This is absolutely not how any reasonable person in the real world would design a system. Suggesting something like this in the real world will, in the best case, get you laughed out of the room.
The least insane approach I could envision would be to have two different schemas. Schema 1 would own the tables. Schema 2 would own the code. At install time, create synonyms for every object that schema 2 needs to reference. If the remote database is available when the code is installed, create synonyms that refer to objects in the remote database. Otherwise, create synonyms that refer to objects in the local database. That lets you create a single set of objects without using dynamic SQL by creating an extra layer of indirection between your code and your tables.
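As a sketch of that synonym layer (the schema, table, and database-link names are made up for illustration):

```sql
-- Run as SCHEMA2 (the code owner) at install time.
-- If the remote database is reachable, point the synonym over the db link:
CREATE OR REPLACE SYNONYM employees FOR schema1.employees@remote_link;

-- Otherwise, fall back to the local copy of the table:
-- CREATE OR REPLACE SYNONYM employees FOR schema1.employees;

-- The package body then references plain "employees": no dynamic SQL,
-- and a single source file works for both configurations.
```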
I have a quite old application with a current database (on MS SQL, but that does not matter). I have scripted it completely, including all required static data. Now I want to introduce DB changes only via update scripts. Each function and each SP will be placed in a stand-alone file, and all schema update scripts will be stored in files named like 'SomeProduct01_0001', meaning the script belongs to product SomeProduct, sprint 1, and is the first schema update script.
I know that each script must be absolutely re-runnable, but I still want the ability to combine these scripts into one based on the DB version (stored in a DB table).
What common best practices are there for handling batches of update scripts?
Which is better: implementing the version analysis in the collector (a bat or exe file), or adding an SQL header to each file? Then again, I already have a version (it consists of the sprint identifier and the script identifier), and I am not sure it is right to duplicate that information in a script header.
How do I skip a file's content if the user tries to apply it to a newer database, while keeping the ability to combine that script with any others to update older databases?
How do I avoid database conflicts if a combined script operates on columns/tables that do not yet exist but will be created by the same script (for example, a table is created on line 10 and then used in a trigger or constraint on line 60; as far as I know, the script will not be validated)? Maybe wrap the entire script in EXEC('')? What do I need to escape besides single-quote characters?
UPD: As David Tanzer answered, it is better to use ready-made solutions for DB migrations, so that may be the best option for cases like mine. It was not an answer to my exact question, but it is suitable for new solutions.
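If you do roll your own, one way to make each script self-guarding is a version-check header. This is only a sketch: the dbo.SchemaVersion table and its columns are assumptions, not something from the question.

```sql
-- Header of SomeProduct01_0001.sql (re-runnable):
-- only apply the change if it has not been recorded yet.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion
               WHERE Sprint = 1 AND ScriptNo = 1)
BEGIN
    -- ... the actual schema change goes here, e.g.:
    -- ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;

    -- Record that this script has been applied.
    INSERT INTO dbo.SchemaVersion (Sprint, ScriptNo, AppliedAt)
    VALUES (1, 1, SYSDATETIME());
END
```

With this shape, a collector can simply concatenate scripts in order; each one skips itself against databases that are already past it.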
You don't have to implement this yourself, there are tools that do it. Take a look at dbmaintain, it provides almost exactly the functionality you described:
http://www.dbmaintain.org/overview.html
I know of and have worked with several teams who use it to manage their database schemas in all their environments: Development, Testing, Staging and Production.
http://flywaydb.org/ seems to be another tool for doing this, and it has even more features. They even have a comparison of multiple tools (including dbmaintain) on their homepage.
In a previous job we had an extensive SQL Server database that constantly had new fields being added, years after release. We stored each table schema in a separate plain text file containing a SQL create or alter statement (I can't remember which, and that's bothering me). When the need came for a new column, we would simply modify the SQL in the plain text file before compiling all the files into one master .sql script. When the script was run it would either create the table if it didn't exist or alter the existing one to preserve the changes, thus preventing any data loss or the need to do any sort of importing/exporting.
My issue is that it was all done before I arrived, and I never got a good chance to read over the utilities and understand them. I'd like to recreate something like this for my own personal use, but I'm not quite sure how it was done. There were utilities for other things like stored procedures and views, but those would just create a stub if the object did not exist, and then all you had to do was call ALTER in the plain text file. I'm not even sure how to begin looking this up, since nothing seemed to come up when searching for "practices", "tips", or "patterns". Does anyone know of some resources for this, or can anyone shed some insight into getting these off the ground? Thanks!
If you google for "Continuous Database integration" you should find what you're looking for.
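The per-table create-or-alter pattern described in the question can be sketched roughly like this. The table and column names are invented, and this is one common shape, not necessarily what that utility actually did:

```sql
-- Customer.sql: safe to run against a new or an existing database.
IF OBJECT_ID(N'dbo.Customer', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.Customer (
        CustomerId int IDENTITY(1,1) PRIMARY KEY,
        Name       nvarchar(100) NOT NULL
    );
END

-- Later additions are guarded so the script stays re-runnable:
IF COL_LENGTH(N'dbo.Customer', N'Email') IS NULL
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
END
```

Existing data is never touched; compiling all such files into one master script gives the create-or-alter behaviour the question describes.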
I would like to have your opinions regarding best practices to adopt when writing SQL scripts to install a database.
PROBLEM A)
In my script I have several batches to create tables.
The tables have many foreign keys to each other, so at the moment I must arrange the batches in the right order to avoid conflicts between FK tables.
I would like to know whether it would be good practice to create the tables and all their columns without FKs first, and then ALTER the tables at the end of the script to add the FKs.
PROBLEM B)
My script will be used to create different DBs on different servers.
The database could have a different name in every installation.
Now in my script I create a Database using:
CREATE DATABASE NameX
and:
USE NameX
to use it.
Because of this, I would need to update the script manually for every installation. I was thinking it would be great to have a CENTRALIZED way of naming the database inside the script, so that changing a single variable would create the database with my name and drive all the USE statements.
I tried to use LOCAL VARIABLES, but without success, because they go out of scope after GO statements.
I do not have any experience in using sqlcmd and variables there.
Any idea how to solve this inside my script?
PS: I use MS SQL 2008 and I will load my script in MS SSMS.
Thanks guys for your help, this community is great :-)
Avoid using "USE database".
Separate the database-creation script from the object-creation scripts.
Use some code (setup, deploy) to execute the database-creation script, replacing #database_name with the real name.
Alternative:
Use a replacement tool to prepare the scripts before deployment (it just replaces your ###database_name### with the real name).
Use a bat file to prepare the scripts.
Alternative:
Use a Database Project in Visual Studio. VS can generate variables that setup projects can change during the deploy process.
Normally one starts by scripting all the tables, followed by the FK scripts, index scripts and the rest. This is standard practice, as you can't add relationships to tables that are not there...
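Deferring the foreign keys, as asked about in PROBLEM A, might look like this (table and constraint names are illustrative):

```sql
-- Batch 1: create the tables in any order, with no FKs yet.
CREATE TABLE dbo.Orders    (OrderId    int PRIMARY KEY, CustomerId int NOT NULL);
CREATE TABLE dbo.Customers (CustomerId int PRIMARY KEY);
GO

-- Batch 2 (end of script): wire up the relationships.
ALTER TABLE dbo.Orders
    ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerId) REFERENCES dbo.Customers (CustomerId);
GO
```

Because all tables exist before any FK is added, the create batches no longer need to be ordered by dependency.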
As for your second problem: there is no way I am aware of to centralize this. Your best option is a global search/replace of the database name across the open files in SSMS.
I'm trying to find out if this is possible, but so far I haven't found any good solutions. What I would like to achieve is to write a stored procedure that can clone a database, but without the stored data. That means all tables, views, constraints, keys and indexes should be included, but without any data. Can it be done?
Sure - your stored proc would have to read the system catalog views to find out what objects are in the database, determine their dependencies, and then create a single script (or a collection of SQL scripts) that re-creates the database, and execute those.
It's possible - but not very nice or easy to do. Especially the dependencies between objects might cause more headaches than first meets the eye....
You could also:
use something like SQL Server Management Studio (if you're on SQL Server - you didn't specify) and create the scripts manually, and just re-execute them on a separate server
use a "diff" tool like Redgate SQL Compare to compare two servers and have the second one brought up to date
I've successfully used the Microsoft SQL Server Database Publishing Wizard for this purpose. It's pretty straightforward, no coding needed. Here's a sample call:
sqlpubwiz script -d DatabaseName -S ServerName -schemaonly C:\Projects2\Junk\DatabaseName.sql
I believe the default is to create both data and schema, but you can use the schemaonly parameter.
Download it here
In SQL Server you can roll through the system tables (sys.tables, sys.columns, etc.) and construct things one at a time. It's going to be very manual and error prone at the beginning, but it should become systematic pretty quickly.
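As a starting point for walking the catalog (a minimal sketch; real scripting has to handle many more column properties, plus defaults, constraints and indexes):

```sql
-- List every user table with its columns and types, in a shape
-- from which CREATE TABLE statements can be assembled.
SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS type_name,
       c.max_length,
       c.is_nullable
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types  ty ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;
```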
Another way to do it is to write something in .NET using SMO. Check out this link:
http://www.sqlteam.com/article/scripting-database-objects-using-smo-updated