Clone entire database with a SP - sql

I'm trying to find out if this is possible, but so far I haven't found any good solutions. What I would like to achieve is to write a stored procedure that can clone a database, but without the stored data. That means all tables, views, constraints, keys and indexes should be included, but without any data. Can it be done?

Sure - your stored proc would have to read the system catalog views to find out what objects are in the database, determine their potential dependencies, and then create a single script (or a collection of SQL scripts) which re-creates the database, and execute those.
It's possible, but not very nice or easy to do. Especially the dependencies between objects might cause more headaches than first meets the eye....
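As a rough illustration, here is the kind of catalog query such a proc would start from - just a sketch, assuming SQL Server (the view names below are SQL Server's; foreign-key dependencies would additionally come from sys.foreign_keys):
select o.name, o.type_desc
from sys.objects o
where o.is_ms_shipped = 0
order by o.type_desc, o.name;
-- references between views, procs and functions, which drive the creation order:
select object_name(d.referencing_id) as referencing_object, d.referenced_entity_name
from sys.sql_expression_dependencies d;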
You could also:
use something like SQL Server Management Studio (if you're on SQL Server - you didn't specify) to create the scripts manually, and just re-execute them on a separate server
use a "diff" tool like Redgate SQL Compare to compare two servers and have the second one brought up to date

I've successfully used the Microsoft SQL Server Database Publishing Wizard for this purpose. It's pretty straightforward, no coding needed. Here's a sample call:
sqlpubwiz script -d DatabaseName -S ServerName -schemaonly C:\Projects2\Junk\DatabaseName.sql
I believe the default is to create both data and schema, but you can use the -schemaonly parameter.

In SQL Server you can roll through the system tables (sys.tables, sys.columns, etc.) and construct things one at a time. It's going to be very manual and error prone at the beginning, but it should become systematic pretty quickly.
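For instance, a starting point for reconstructing CREATE TABLE statements could look roughly like this (a sketch only; constraints and indexes need further views such as sys.indexes and sys.foreign_keys):
select t.name as table_name, c.name as column_name, ty.name as type_name, c.max_length, c.is_nullable
from sys.tables t
join sys.columns c on c.object_id = t.object_id
join sys.types ty on ty.user_type_id = c.user_type_id
order by t.name, c.column_id;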
Another way to do it is to write something in .Net using SMO. Check out this link:
http://www.sqlteam.com/article/scripting-database-objects-using-smo-updated

Related

What is the best way to design, generate, and version a database schema script for MS SQL Server?

I have never really seen any questions (with answers) as general as this, so I'm hoping to get some useful feedback. The reason I'm asking is because I've done all of this before and I have my own ways, but sometimes I feel it's not the best practice.
Let's take for example that I can't afford better db modeling tools and I only have sql server and ms sql server management studio. What I do is:
I design all of the entities in my db (tables, primary keys, foreign keys, indexes, etc) with mssms
then I just generate the schema script using the 'Generate Scripts...' command in mssms. The script that's generated is rather large (using sql server express 2012) and doesn't seem very well organized for maintenance.
Example: after all the table creation scripts are set up, there's a bunch of ALTER TABLE commands to add all the constraints. This kind of thing seems like it would be better placed in the table creation section, maybe not. Also, for upgrade-ability, I normally add an 'IF NOT EXISTS' guard to each table creation section, so that the script doesn't throw an error when I need to re-run it after the db is updated with new tables, columns, etc.
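For concreteness, the guard I mean looks roughly like this (the table is just an example):
IF NOT EXISTS (SELECT 1 FROM sys.tables WHERE name = 'Customer' AND schema_id = SCHEMA_ID('dbo'))
BEGIN
    CREATE TABLE dbo.Customer (
        CustomerId INT IDENTITY(1,1) PRIMARY KEY,
        Name NVARCHAR(100) NOT NULL
    );
END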
Then for versioning, I generally have a separate script that I run to add the schema version to a VERSION table in the db itself (with just one row).
This allows me to do incremental upgrades when I run the script by adding an 'if new-version > current-version' sort of check.
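Concretely, that check is something like the following (the VERSION table layout and the example ALTER are made up for illustration):
IF (SELECT SchemaVersion FROM dbo.VERSION) < 2
BEGIN
    -- changes introduced in schema version 2:
    ALTER TABLE dbo.Customer ADD Email NVARCHAR(255) NULL;
    UPDATE dbo.VERSION SET SchemaVersion = 2;
END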
It seems to have worked out for me in the past, but it just seems kind of, I don't know, not very sophisticated. Can a sql expert shed some light on this subject? It's something we all do for every data driven web app we create, over and over again. I'd like to see how other developers do it.
To recap,
how do you go about designing your db model and generating scripts (do you do it with a design tool, write them from scratch, etc?),
how do you manage incremental db changes over time?
How do you version control your database?
SQL Server Data Tools is ideal for this. It has all the design features you require and configurable scripting. It will also diff two databases and generate the change script for you. Oh - and it's free!

connecting to remote oracle database in SQL

I need to do some data migration between two oracle databases that are on different servers. I've thought of some ways to do it, like writing a jdbc program, but I think the best way is to do it in SQL itself. I can also copy the entire tables over to the database I am migrating to, but these tables are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL developer then connect to the other one using SQL and writing update/insert functions on tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle-specific or tell me how to open the external connection by supplying the server hostname/port/SID/user credentials.
thanks for the help!
If you create a Database Link, you can just select from the other database by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement.
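A minimal example (hostname, port, SID and credentials are placeholders to replace with your own):
CREATE DATABASE LINK remote_db
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING '//remotehost:1521/REMOTESID';
SELECT * FROM employees@remote_db;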
It depends on whether it's a one-time thing or a regular process, and whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, what you're attempting to accomplish is to copy a couple of tables from one db to another. If they can reach one another then it's really simple: you could just create a DBLINK (http://www.dba-oracle.com/t_how_create_database_link.htm) and then do an INSERT ... SELECT from either side, using the DBLINK for one of the tables and the local table as the receiver or sender. It's pretty straightforward.
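For example, assuming a link named remote_db as above and identically structured tables on both sides:
INSERT INTO employees
SELECT * FROM employees@remote_db;
COMMIT;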
But if it's a one-time thing I would just move the tables with expdp and impdp, since that will be a lot faster and a lot less strain on the DB.
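Something along these lines (user, connect identifiers and file names are placeholders; DATA_PUMP_DIR is the default directory object, and you'd copy the dump file over to the target server between the two steps):
expdp system/password@source_db tables=EMPLOYEES directory=DATA_PUMP_DIR dumpfile=employees.dmp logfile=exp_employees.log
impdp system/password@target_db tables=EMPLOYEES directory=DATA_PUMP_DIR dumpfile=employees.dmp logfile=imp_employees.log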
If it's something you need to maintain and keep updated, why not just add the DBLINK and use that on both sides? This will be dependent on network performance though.
If this is a bit out of your depth or you can't create dblinks due to restrictions, SQL Developer has had a database copy option for a while and you can go as far as copying individual tables, but it's very heavy on the system where it's being run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).

SQL stored procedure stored in svn?

Is there a way to backup/track changes to SQL stored procedures in SVN, or any other method for tracking changes to SQL? I am using SQL 2008 and am not a DBA, but am in charge of a small company's database.
TIA,
Brian Enderle
You might try Red Gate's SQL Source Control and SQL Compare to track changes.
We write procs and save them in Subversion as scripts. You check in each version of the script and then you can easily see previous versions or do a diff between them. If you want to reduce unnecessary diffs from formatting, get a SQL formatting tool and have everyone format the same way before check-in.
All SQL code should be handled this way, not just procs. We store table structures, views, etc. in Subversion. Of course, with tables you have a create script and then alter table scripts for each change in order, so that you don't wipe out tables with existing data by doing a drop and recreate. We also script out inserts to lookup tables to make them easier to port to other servers as well.
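A typical proc script in this scheme is written so it's safe to re-run; schematically, something like this (the proc is hypothetical):
IF OBJECT_ID('dbo.usp_GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetCustomer;
GO
CREATE PROCEDURE dbo.usp_GetCustomer
    @CustomerId INT
AS
BEGIN
    SELECT CustomerId, Name FROM dbo.Customer WHERE CustomerId = @CustomerId;
END
GO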
You can store it in svn, but it will have difficulty tracking exact changes if members of your team use different tools to write sql: postgresql seems to be particularly bad at formatting sql. You could consider using a free formatter: Eddie Awad lists some here:
http://awads.net/wp/2005/12/12/format-your-sql-the-easy-way/
Committing your code to source control depends on how you arrange your projects: your scripts could exist in a "misc" folder in an eclipse/visual studio project, or directly committed to svn via TortoiseSVN.
I have two suggestions for you (which I have both used myself):
you can use ScriptDB in order to extract the database schema to your file system and then commit it to svn. I actually set up a scheduled task which invokes ScriptDB every night and then commits the folder to svn (which only commits actually modified files) automatically.
If you are using VS2010 you can open a database project and synchronize it periodically with your database via the schema compare option from the data menu. After that you can commit your changes via tortoiseSVN or Ankh directly from VS.

Best practices for writing SQL scripts for deployment

I was wondering what the best practices are for writing SQL scripts to set up databases for production and/or development, for instance:
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
Is it correct to disable FK checks before executing the body of the script?
May I include the whole script in a transaction?
Is it better to generate one script per database than one script for all of them?
Thanks!
The problem with your question is that it's hard to answer, as it depends on the way the scripts are used and what you are trying to achieve. You also don't say which DB server you are using, as there are tools provided which can make some tasks easier.
Taking your points in order, here are some suggestions, which will probably be very different to everyone else's :)
Should I include the CREATE DATABASE statement?
What alternative are you thinking of using? If your question is whether you should put the CREATE DATABASE statement in the same script as the table creation, it depends. When developing a DB I use a separate create DB script, as I have a script to drop all objects, so I don't need to create the database again.
Should I create users for the database in the same script?
I wouldn't, simply because the users may well change even when your schema has not. Might as well manage those changes in a smaller script.
Is it correct to disable FK checks before executing the body of the script?
If you are importing the data in an attempt to recover the database then you may well have to, if you are using auto-increment IDs and want to keep the same values. Also you may end up importing the tables "out of order" and not want checks performed.
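In SQL Server, for example, that is done per table with NOCHECK (the table name is hypothetical; WITH CHECK revalidates the data afterwards):
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT ALL;
-- ... import the data ...
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL;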
May I include the whole script in a transaction?
Yes, you can, but again it depends on the type of script you are running. If you are importing data after rebuilding a db then the whole import should work or fail. However, your transaction log is going to be huge during the import.
Is it better to generate one script per database than one script for all of them?
Again, for maintenance purposes it's probably better to keep them separate.
This probably depends what kind of database and how it is used and deployed. I am developing a n-tier standard application that is deployed at many different customer sites.
I do not add a CREATE DATABASE statement in the script. Creating the database is part of the installation script, which allows the user to choose server, database name and collation.
I have no knowledge of the users at my customers' sites, so I don't add create user statements; the only user that needs access to the database is the user executing the middle tier application.
I do not disable FK checks. I need them to protect the consistency of the database, even if it is I who wrote the body scripts. I use FKs to catch my errors.
I do not include the entire script in one transaction. I require the users to take a backup of the db before they run any db upgrade scripts. When creating a new database there is nothing to protect, so running in a transaction is unnecessary. For upgrades there are sometimes extensive changes to the db. A couple of years ago we switched from varchar to nvarchar in about 250 tables. Not something you would like to do in one transaction.
I would recommend you to generate one script per database and version control the scripts separately.
Direct answers; please ask if you need me to expand on any point.
* Should I include the CREATE DATABASE statement?
Normally I would include it since you are creating and owning the database.
* Should I create users for the database in the same script?
This is also a good idea, especially if your application uses specific users.
* Is it correct to disable FK checks before executing the body of the script?
If the script includes data population, then it helps to disable them so that the order of inserts is not too important; otherwise you can get into complex scripts that insert (without the fk link), create the fk record, then update the fk column.
* May I include the whole script in a transaction?
This is normally not a good idea, especially if data population is included, as the transaction can become quite unwieldy and large. Since you are creating the database, just drop it and start again if something goes awry.
* Is it better to generate one script per database than one script for all of them?
One per database is my recommendation so that they are isolated and easier to troubleshoot if the need arises.
For development purposes it's a good idea to create one script per database object (one script for each table, stored procedure, etc). If you check them into your source control system that way then developers can check out individual objects and you can easily keep track of versions and know what changed and when.
When you deploy you may want to combine the changes for each release into one single script. Tools like Red Gate SQL compare or Visual Studio Team System will help you do that.
Should I include the CREATE DATABASE statement?
Should I create users for the database in the same script?
That depends on your DBMS and your customer.
In an Oracle environment you will probably never be allowed to do such a thing (mainly because in the Oracle world a "database" is something completely different than e.g. in the PostgreSQL or MySQL world).
Sometimes the customer will have a DBA that won't let you create databases (or schemas or users - depending on the DBMS in use). So you will need to supply that information to the DBA in order for him/her to prepare the environment for your script.
May I include the whole script in a transaction?
That totally depends on the DBMS that you are using.
Some DBMS don't support transactional DDL and will implicitly commit any open transaction when you execute a DDL statement, so you need to consider the order of statements in your installation script.
For populating the tables with data I would definitely try to do that in a single transaction, but again this depends on your DBMS.
Some DBMS are faster if you commit only once or very seldom (Oracle and PostgreSQL fall into this category) but will slow down if you commit more often.
Other DBMS handle smaller but more frequent transactions better and will slow down if the transactions get too big (SQL Server and MySQL tend to fall into that category).
The best practices will differ considerably depending on whether it is a first-time set-up or a new version being pushed. For a first-time set-up, yes, you need create database and create table scripts. For a new version, you need to script only the changes from the previous version, so no create database and no create table unless it is a new table. Now you need alter table statements, because you don't want to lose the existing data. I usually write stored procs, functions and views with a drop and create statement, as dropping those objects doesn't generally affect the underlying data.
I find it best to create all database changes with scripts that are stored in source control under the version. So if a client is new, you run the create version 1.0 scripts, then apply all the other versions in order. If a client is just upgrading from version 1.2 to version 1.3, then you run just the scripts in version 1.3 source control repository. This would also include scripts to populate or add records to lookup tables.
For transactions you may want to break them up into several chunks so as not to leave a prod database locked in one long transaction.
We also write reversal scripts to return to the old version if need be. This makes life easier if you have a part of a change that causes unanticipated problems on prod (usually performance issues).

How can I synchronize views and stored procedures between SQL Server databases?

I have a 'reference' SQL Server 2005 database that is used as our global standard. We're all set up for keeping general table schema and data properly synchronized, but don't yet have a good solution for other objects like views, stored procedures, and user-defined functions.
I'm aware of products like Redgate's SQL Compare, but we don't really want to rely on (any further) 3rd-party tools right now.
Is there a way to ensure that a given stored procedure or view on the reference database, for example, is up to date on the target databases? Can this be scripted?
Edit for clarification: when I say 'scripted', I mean running a script that pushes out any changes to the target servers. Not running the same CREATE/ALTER script multiple times on multiple servers.
Any advice/experience on how to approach this would be much appreciated.
1) Keep all your views, triggers, functions, stored procedures, table schemas etc in Source Control and use that as the master.
2) Failing that, use your reference DB as the master and script out views and stored procedures etc: Right click DB, Tasks->Generate Scripts and choose your objects.
3) You could even use transactional replication between Reference and Target DBs.
I strongly believe the best way is to have everything scripted and placed in Source Control.
You can use the system tables to do this.
For example,
select * from sys.syscomments
The "text" column will give you all of the code for the store procedures (plus other data).
It is well worth looking at all the system tables and procedures. In fact, I suspect this is what RedGate's software and other tools do under the hood.
I have just begun experimenting with this myself, so I can't really be specific about all the gotchas and which other system tables you need to query, but this should get you started.
Also see:
Query to list SQL Server stored procedures along with lines of code for each procedure
which is slightly different question than yours, but related.
I use (and love) the RedGate tools, but when Microsoft announced Visual Studio 2010, they decided to allow MSDN subscribers who get Visual Studio 2008 Team System to also get Visual Studio 2008 Database Edition (which has a schema compare tool).
So if you or your organization has an MSDN subscription, you might want to consider downloading and installing the Database Edition over your Team System to get all the features now.
More details at http://msdn.microsoft.com/en-us/vsts2008/products/cc990295.aspx
Take a look at ScriptDB on Codeplex (http://www.codeplex.com/ScriptDB)
It is a console C# app that creates scripts of the SQL Database objects using SMO. You can use it to compare scripts generated on two servers. Since it's open source, adjust it if you need to.
Timur