SQL tool for mirroring data from live database to test?

Currently, we have several environments, but mainly we have a live environment and a test environment, as most do.
We are always having issues with getting good data into our test environment, largely because our test SQL environment is maintained by a bunch of loose scripts that are not maintained very well. Obviously, one course of action is to maintain these more carefully, but we have found that is not going to happen.
So, I am looking for a tool that would help with this. Has anyone found or used such a tool?
Any advice or direction would be greatly appreciated. This is really a thorn in our developers' sides.
*Update:* Our main issue is that test structures often differ from live, and we need an automated solution that handles this, i.e. when a table has more columns in test than in live. Updating the question to reflect this; thank you for your answers.

As far as transferring data is concerned, use the SSMS Import and Export Data wizard. If the source and destination table names and schemas are the same, data transfer is a breeze. I use it quite often and have never faced any problem (use the identity insert option if there is an identity column).
The script generation feature of SSMS is not that bad either. Normally I have to edit the 'Create Database' and 'Create Users' parts of the script manually.
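If you skip the wizard and move the data with a plain INSERT ... SELECT instead, preserving identity values looks roughly like this (the database, table and column names are made up for illustration):
-- Copy rows from live into an identically-shaped test table,
-- keeping the original identity values.
SET IDENTITY_INSERT TestDb.dbo.Orders ON;
INSERT INTO TestDb.dbo.Orders (OrderId, CustomerId, OrderDate)
SELECT OrderId, CustomerId, OrderDate
FROM LiveDb.dbo.Orders;
SET IDENTITY_INSERT TestDb.dbo.Orders OFF;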

Related

How to keep 2 Database Schemas consistent without affecting the data at all?

I have two server machines (one for development, the other for clients) with SQL Server 2008 installations. Whenever a developer makes changes to tables/views/stored procedures on the Development Server, those changes need to be reflected on the Client Server as well.
Currently, I am handling all changes manually, like new columns in tables, changes in stored procedures, etc. Can DB scripts or replication automate the entire procedure for me? Or is there some better solution to keep the database schemas consistent?
Help will be highly appreciated.
Thanks!
I highly recommend creating an environment where all schema changes are done exclusively through SQL scripts - never "manually" in any environment. Each developer has to commit the script related to his/her bugfix (or new feature) to a version control system.
Typically you'd have one big script that creates the database from scratch and one for each version upgrade (from 1.0 to 1.1, one from 1.1 to 1.2, and so on).
If you have the manpower, it is also very handy to maintain one "from-scratch" script for each version. Whether you need that or not depends on how often an installation on an empty system is done.
We have had very good experience with using Liquibase to maintain all this. It automatically keeps track of which patches have been applied to a database and which need to be run during an upgrade. It also prevents you from running the same migration twice.
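To give an idea of what one of these versioned changes looks like in practice, here is a minimal sketch of a Liquibase changelog in its SQL-formatted flavour (the author name, changeset id, table and column are invented for the example):
--liquibase formatted sql

--changeset alice:add-customer-email
ALTER TABLE Customer ADD Email VARCHAR(255);
--rollback ALTER TABLE Customer DROP COLUMN Email;
Liquibase records each applied changeset in its own tracking table, which is how it knows not to run the same migration twice during an upgrade.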
A problem that all database applications have, and a difficult one to resolve. Such a solution cannot be scheduled, as the changes made by developers need to be tested first, and you certainly don't want untested code merged with your live database. This question is of interest to me because I'm currently writing a generic solution to resolve this issue once and for all.
But in the meantime, we're using an open-source product called Open DBDiff (Google it - you can't miss it), which could do with some polishing but works well enough. You pass it your source and target databases, and it generates a script to make the target the same as the source. It does seem to have some trouble copying assemblies and user roles, but for everything else, I haven't had any trouble.
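For the column-drift case mentioned in the question update, the script such a diff tool produces is presumably just ordinary DDL along these lines (the table and column are illustrative, not actual Open DBDiff output):
-- Add the column that exists in the source but is missing from the target.
ALTER TABLE dbo.Customers ADD LoyaltyPoints INT NULL;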
I believe a human should do the deployments, after making sure the changes have been tested and properly checked into source control. This is not something to automate fully.
Humans should use the tools, though. I use Visual Studio 2010 Professional, which has a powerful schema comparison tool, generates and executes deployment scripts, and has source control integration.

Managing database updates

I've been thinking of ways to improve managing changes to our database structure. I have a build server that creates nightly builds, so I was thinking we could somehow create database dumps, backups, and scripts from the test environment as part of the build process. Then when deploying an update to the client we could use a tool like DBDiff to create the database update script.
Is anybody doing something similar? Is it even a good idea? Maybe some good tips on what to use to create these dumps on the build server?
Rather than identifying the differences, I recommend keeping a proper script that creates the database from scratch.
We are quite satisfied with using Liquibase to manage all DB migrations in our projects. It knows which "patches" have been applied and ensures that only those that are missing will be applied to the target database.
This is possible.
The differencing is the hard part. Once you identify the differences, you need to construct the appropriate SQL and then apply it. You can either apply it directly or create a script that you can run after review.
When both sides have changed, you need to decide whether the target system should keep its change or whether that change should be removed completely.
Remember that changes on the target system may also include data, and if you remove a table or column, your referential integrity might be completely ruined.
One more thought: you will need access to the target system in order to determine the diff. If this is a generic utility, you will need to make it an executable that runs after the fact, not part of the build.
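For the differencing step, here is a very rough sketch of the idea in plain SQL - comparing column metadata between two databases on the same server to find columns that exist in test but not in live (the database names are assumptions for the example):
-- Columns present in TestDb but missing from LiveDb.
SELECT t.TABLE_SCHEMA, t.TABLE_NAME, t.COLUMN_NAME
FROM TestDb.INFORMATION_SCHEMA.COLUMNS AS t
LEFT JOIN LiveDb.INFORMATION_SCHEMA.COLUMNS AS l
    ON  l.TABLE_SCHEMA = t.TABLE_SCHEMA
    AND l.TABLE_NAME   = t.TABLE_NAME
    AND l.COLUMN_NAME  = t.COLUMN_NAME
WHERE l.COLUMN_NAME IS NULL
ORDER BY t.TABLE_NAME, t.COLUMN_NAME;
From that list you would still have to generate and review the corresponding ALTER statements yourself, which is exactly the hard part described above.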
You will find the Visual Database Tools very useful here.
http://msdn.microsoft.com/en-us/library/y5a4ezk9.aspx
There is a schema compare built right into Visual Studio (it can also be run from the command line). There is also a database project that contains a complete set of scripts for the database and the objects that it contains. This can be checked into source control along with your source code.
You can deploy a new database based on these scripts with a context menu click.
Have a look at http://www.codeproject.com/KB/architecture/Database_CI.aspx and http://www.martinfowler.com/articles/evodb.html - there's a fair amount of thinking that's already available.
We are currently looking at the Juneau CTP release, SQL Tools for Visual Studio. It has a snapshot and schema comparison feature. Basically, it can auto-generate scripts between two schemas for you. If you use this against two versions of your database, it will give you an upgrade script.
http://msdn.microsoft.com/en-us/data/gg427686
Here at Red Gate we're close to releasing a solution which solves that precise issue using SQL Source Control and SQL Compare. We have an early access program which will allow you to try this out. Please visit the following link for sign-up details.
http://www.red-gate.com/MessageBoard/viewtopic.php?p=46951#46951

Tool for synchronising database changes from development database to production?

This may be a pipe dream, but I'm hoping someone knows of a tool which can be configured to compare all or some (keyed) of the data in two identical databases and merge it, perhaps based on relationships.
Specifically looking for one for SQL Server.
I'm not really asking for the best one, but if it exists it would be nice to hear how it is used.
Any other ideas for how to manage the work done or data added in dev and push it out to production without copying the entire database are welcome.
Thanks!
We use this and personally think it's excellent.
http://www.red-gate.com/products/sql-development/sql-data-compare/
There is also another product for the schema side.
http://www.red-gate.com/products/sql-development/sql-compare/
I don't know of a specific tool, but you can build into your publication process the analysis and execution of delta files containing the diffs from one version to another. Magento and WordPress, for example, use something like this. They have something like this:
-- sql_update_001_002.sql
UPDATE some_table SET some_column = 'new value' WHERE id = 1;
DELETE FROM some_other_table WHERE is_obsolete = 1;
CREATE TABLE a_new_table (id INT PRIMARY KEY, name VARCHAR(100));
-- compare some keys or do other logic, etc.
Then they have a script that analyses the current version and, if needed, executes the corresponding SQL.
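One way that version check itself might look, sketched in T-SQL (the schema_version table and the version numbers are made up for the example):
-- A one-row table records which schema version the database is on.
CREATE TABLE schema_version (version INT NOT NULL);
INSERT INTO schema_version (version) VALUES (1);

-- Inside sql_update_001_002.sql: only run if we are still on version 1.
IF EXISTS (SELECT 1 FROM schema_version WHERE version = 1)
BEGIN
    -- ... apply the 001 -> 002 changes here ...
    UPDATE schema_version SET version = 2;
END;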
Navicat allows you to perform data and structure synchronization between two databases (even ones located on different servers).
In terms of tools, I agree with Chris - Red Gate's toolset for both schema and data comparisons.
If you are also thinking about your overall DB development process, I have written a blog post on the topic which might be of interest.
It also has some links to how others have tackled this subject.
http://michaelbaylon.wordpress.com/category/data-management/database-development/sql-script-management/

Merging database structure changes from one server to another

I currently have a website with three builds: development, staging and production. Each of those has its own MySQL database instance. Each of the instances has different data in it which should not change (orders).
My question is: if I make changes to the structure of the development database, is there an easy way to propagate those changes to staging and production without affecting the data?
Thanks.
Just do all of your schema changes with scripts that you keep in source control. When you release code to staging, you ship the schema update scripts with the build, and you use the same ones when you ship to production.
It's that easy.
Don't EVER manually hack the database schema. Test your migration and rollback scripts (making rollback scripts is a VERY good idea).
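A minimal sketch of what such a paired migration/rollback script set might look like (the file names, table and column are hypothetical):
-- add_discount_to_orders.up.sql (the migration, shipped with the build)
ALTER TABLE orders ADD COLUMN discount DECIMAL(10,2) NOT NULL DEFAULT 0;

-- add_discount_to_orders.down.sql (the matching rollback script)
ALTER TABLE orders DROP COLUMN discount;
Because only the structure is altered, the existing order data in staging and production is left untouched.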
I'm sure there is a better solution than the one I'll give you here... but until someone posts one, here you go:
If you can script your database structure (or you already have the scripts in source control), you can compare the scripts side by side and then extract the differences to run on the required database.
I'm sure there are tools out there that would do all that for you, but I can't recall any names or whether they're free. I hope someone helps you more than that :)

Scripting your database first versus building the database via SQL Server Management Studio and then generating the script

I had a (friendly but heated) argument with my lead developer the other day because our project has T-SQL scripts that I code directly into SQL files which I then run against the database. I find that when I do this, it's easy to work out the schema in advance without fiddly pointing and clicking, and there's no opportunity to forget to generate a script to put into source control: generating the script is no longer a chore you have to do after the fact, but an implicit part of the process (it also leads to cleaner scripts without the extra crap that SQL Server Management Studio inserts into the scripts it generates).
My lead developer insists that having to manually script it out is a pain in the arse and that he absolutely refuses to write his scripts by hand when there are perfectly good tools to do it without coding. I've noticed that the copying of his changes into the actual scripts tends to get delayed a bit as a result though.
What are your thoughts on the pros and/or cons of doing it one way vs the other? Am I being too rigid/old-school in my sticking to hand coding schema scripts or is he being too reliant on third party tools and losing something in the process?
I always script stuff myself because the wizards sometimes don't script things the way I like, and they also give funky names to defaults.
Scripting things yourself is also good practice in case you get laid off and have to go to an interview where they ask you to script DDL on the whiteboard.
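For example, a hand-written default gets an explicit, readable constraint name, whereas the GUI tools typically auto-generate something opaque (the table and column here are invented for illustration):
-- Hand-scripted: the constraint name is chosen deliberately.
ALTER TABLE dbo.Orders
    ADD CONSTRAINT DF_Orders_Status DEFAULT ('New') FOR Status;
-- A generated script tends to produce a name like DF__Orders__Status__3B75D760 instead.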
As I usually collaborate with a colleague during schema design, I tend to design the schema using the GUI tools, as it's easier to discuss with a diagram of the tables in front of you. I then generate the scripts, being careful to select the exact options that I want, to avoid having to make manual changes post-export.
I think a decision on the relative merits of the two approaches might take into account factors such as:
the frequency of changes to the schema
the frequency with which changes need to be propagated to other schemas (test, user acceptance, production, clients * n, etc)
the degree to which the schema may vary across development branches
how well-known in advance your various changes can be scheduled
whether or not you can generate SQL "diff" scripts between schemas.
On balance, I tend to prefer to work with a script for each change (or "migration"). It lets me resequence change releases as priorities shift.
Just because you can create tables in a graphical tool doesn't necessarily mean you should.
I find it's as quick to write a script as it is to use SSMS. You still have to type names in SSMS, and the time spent moving between keyboard and mouse could be used writing the proper script anyway.
The two of you are almost working with two sets of code. Consistency seems to be a key factor in these types of decisions. In your case, if you create a script and your boss uses the GUI to add a field, how do you stay in sync? You can't use your script to rebuild the table without editing it (a chance for error).
Maybe he should pull rank and force you to format your scripts the same way the GUI creates them - just kidding.
I think you should flip on it...