I'm working on automated tests for a particular web app. It uses a database to persist data (SQL Server in this case).
In our automated tests we perform several database changes (inserts, updates), and after the tests have been executed we want to restore the database to its original state.
The steps would be something like this:
Somehow create a backup
Execute tests
Restore data from the backup
The first version was pretty simple: back up each table and then restore it. But we ran into an issue with referential integrity.
After that we decided to use a full database backup, but I don't like this idea.
We were also thinking that we could track all references and back up only the needed tables rather than the whole database.
Our latest thought was to somehow log our actions (inserts, updates) and then perform the reverse actions (deletes for inserts, updates with the old data for updates), but that looks rather complicated.
Maybe there is another solution?
Actually, there is no need to restore the database in native SQL Server terms, nor to track the changes and then revert them.
You can use ApexSQL Restore – a SQL Server tool that attaches both native and natively compressed SQL database backups and transaction log backups as live databases, accessible via SQL Server Management Studio, Visual Studio or any other third-party tool. It allows attaching single or multiple full, differential and transaction log backups.
For more information on how to use the tool in your scenario, see the Using SQL database backups instead of live databases in a large development team online article.
Disclaimer: I work as a Product Support Engineer at ApexSQL
If you are only making minimal changes to the database with inserts and updates, a much better alternative is to make those changes within a transaction that can be rolled back at the end of the test. This way SQL Server will automatically keep track of what you changed and will revert the database to the state it was in before the test began.
BEGIN TRANSACTION http://technet.microsoft.com/en-us/library/ms188929.aspx
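A minimal sketch of that approach; the table and column names here are only placeholders:

BEGIN TRANSACTION;

-- test actions: inserts, updates, etc.
INSERT INTO Customers (Name) VALUES ('test customer');
UPDATE Orders SET Status = 'Processed' WHERE OrderId = 42;

-- assert on the results here, then undo everything the test changed
ROLLBACK TRANSACTION;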
I think a better idea is to create a test database.
You can also create an interface for your data-access methods, with one implementation for the real data (the real DB) and a second one for the test DB.
You can create a database snapshot.
The snapshot will keep track of all data pages that are changed during your test. Once you are done, you can restore from the snapshot to return to the previous state.
CREATE DATABASE [test_snapshot1] ON
( NAME = test, FILENAME =
'e:\SQLServer\Data\test_snapshot1.ss' )
AS SNAPSHOT OF [test];
GO
--do all your tests
RESTORE DATABASE [test] FROM
DATABASE_SNAPSHOT = 'test_snapshot1';
GO
You have to create a snapshot file for each datafile of your database. So if you have a database with 4 data files, then your snapshot syntax should include 4 snapshot files.
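For instance, if [test] had four data files (the logical names below are made up), the snapshot would need four snapshot files:

CREATE DATABASE [test_snapshot1] ON
( NAME = test_data1, FILENAME = 'e:\SQLServer\Data\test_data1.ss' ),
( NAME = test_data2, FILENAME = 'e:\SQLServer\Data\test_data2.ss' ),
( NAME = test_data3, FILENAME = 'e:\SQLServer\Data\test_data3.ss' ),
( NAME = test_data4, FILENAME = 'e:\SQLServer\Data\test_data4.ss' )
AS SNAPSHOT OF [test];
GO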
Found a simple solution: renaming the table.
The algorithm is pretty simple:
Rename the table to another name, e.g. "table" to "table-backup" (references will now refer to that backup table)
Create "table" from "table-backup" ("table" will not have any dependencies)
Perform any actions in the application
Drop "table" with the dirty data (this will not break referential integrity)
Rename "table-backup" back to "table" (referential integrity is preserved).
Thanks
I'm working on a project as an outsourcing developer where I don't have access to the testing and production servers, only the development environment.
To deploy changes I have to create SQL scripts containing the changes to make on each server for the feature I wish to deploy.
Examples:
When I make each change on the database, I save the script to a folder, but sometimes this is not enough: once I sent a script to alter a view but forgot to include the new tables that I had created in another feature.
Another situation would be changing a table via the SSMS GUI, forgetting to create a script with the changed or new columns, and later having to send a script to update the table in testing.
Since some features can be sent for testing and others go straight to production (for example, queries to feed Excel files), it's hard to keep track of what I have to send to each environment.
Since the deployment team just executes the scripts I send them to update the database, how can I manage/keep track of changes to a SQL Server database without a compare tool?
[Edit]
The current tools that I use are SSMS, VS 2008 Professional and TFS 2008.
I can tell you how we at xSQL Software do this using our tools:
The deployment team has an automated process that takes a schema snapshot of the staging and production databases and dumps the snapshots nightly on a share that the development team has access to.
Every morning the developers have up-to-date schema snapshots of the production and staging databases available. They use our Schema Compare tool to compare the dev database with the staging/production snapshot and generate the change scripts.
Note: to take the schema snapshot you can either use the Schema Compare tool or our Schema Compare SDK.
I'd say you can keep structural copies of the test and production databases as additional development databases, and make sure to always apply a change there whenever you send a script.
On these databases you can set up triggers that capture all DDL events and record them in a table with getdate() attached. With that you should be able to track changes pretty easily, and simple comparisons also become easier.
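A rough sketch of such a DDL trigger; the audit table name DDL_Log and the event list are just examples:

CREATE TABLE dbo.DDL_Log
(
    EventTime  datetime      NOT NULL DEFAULT (getdate()),
    EventType  nvarchar(100) NULL,
    ObjectName nvarchar(256) NULL,
    TsqlText   nvarchar(max) NULL
);
GO

CREATE TRIGGER trg_LogDDL ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
    CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_VIEW, ALTER_VIEW, DROP_VIEW
AS
BEGIN
    DECLARE @e xml;
    SET @e = EVENTDATA();
    -- store what was changed and by which statement (EventTime defaults to getdate())
    INSERT INTO dbo.DDL_Log (EventType, ObjectName, TsqlText)
    VALUES (@e.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)'));
END;
GO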
Look into Liquibase, especially its SQL format, and see if that gives you what you want. I use it for our database and it's great.
You can store all your objects in separate scripts, but when you do a Liquibase "build" it will generate one SQL script with all your changes in it. The really important part is getting your Liquibase configuration to put the objects in the correct dependency order. That is, tables get created before foreign key constraints, for example.
http://www.liquibase.org/
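For illustration, a changelog in Liquibase's SQL format might look like this (the author/id values and objects are made up):

--liquibase formatted sql

--changeset jsmith:1
CREATE TABLE customer (
    id   int PRIMARY KEY,
    name varchar(100) NOT NULL
);
--rollback DROP TABLE customer;

--changeset jsmith:2
ALTER TABLE customer ADD email varchar(255);
--rollback ALTER TABLE customer DROP COLUMN email;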
Is there a way to back up certain tables in a SQL Server database? I know I can move certain tables into different filegroups and perform a backup of those filegroups. The only issue is that I believe you need a backup of all the filegroups and transaction logs to restore the database on a different server.
The reason I need to restore the backup on a different server is that these are backups of customers' databases. For example, we may have a remote customer and need to get a copy of their 4GB database. 90% of this space is taken up by two tables; we don't need these tables as they only store images. Currently we have to take a copy of the database and upload it to an FTP site. With larger databases this can take a lot of time, and we need to reduce the database size.
The other way I can think of doing this would be to take a full backup of the DB and restore it on the client's SQL Server, then connect to the new temp DB and drop the two tables. Once this is done we could take a backup of that DB. The only issue with this solution is that it could use a lot of system resources while running, so it's less than ideal.
So my idea was to use two filegroups. The primary filegroup would host all of the tables except the two large tables, which would be in the second filegroup. Then when we need a copy of the database we just take a backup of the primary filegroup.
I have done some testing but have been unable to get it working. Any suggestions? Thanks
Basically your approach using 2 filegroups seems reasonable.
I suppose you're working with SQL Server on both ends, but you should clarify for each end whether that is truly the case, as well as which editions (Enterprise, Standard, Express, etc.) and which releases (2000, 2005, 2008, 2012?).
Table backup in SQL Server is a dead horse that still gets a good whippin' now and again; basically, it is not part of the built-in backup feature set. As you rightly point out, the partial backup feature can be used as a workaround. Also, if you just want to transfer a snapshot of a subset of tables to another server over FTP, you might try the bcp utility, as suggested by one of the answers in the linked post, or the import/export data wizards. To round out the list of table backup solutions and workarounds for SQL Server, there is at least one piece of third-party software that claims to allow individual recovery of table objects, but unfortunately doesn't seem to offer individual object backup: "Object Level Recovery Native" by Red Gate (I have no affiliation with, or experience using, this particular tool).
As for your more specific concern about restoring from partial database backups:
I believe you need a backup of all the filegroups and transaction logs
to restore the database on a different server
1) You might have some difficulties the first time you try to get it to work, but you can perform restores from partial backups as far back as SQL Server 2000 (as a reference, see here).
2) From 2005 onward you have the option of restoring partially today and, if you need to, restoring the remainder of your database later. You don't need to include all filegroups; you always include the primary filegroup, and if your database is in simple recovery mode you also need to add all read-write filegroups.
3) You need to apply log backups only if your DB is in bulk-logged or full recovery mode and you are restoring changes to a read-only filegroup that has become read-write since the last restore. Since you are expecting changes to these tables, you will likely not be concerned with read-only filegroups, and so not concerned with shipping and applying log backups.
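As a sketch of the filegroup backup and partial restore discussed above (the database, filegroup and logical file names are placeholders; adjust the options for your recovery model):

-- On the source server: back up only the primary filegroup
BACKUP DATABASE CustomerDb
    FILEGROUP = 'PRIMARY'
    TO DISK = 'D:\Backups\CustomerDb_primary.bak';

-- On the target server: partial restore of just the primary filegroup under a new name
RESTORE DATABASE CustomerDb_copy
    FILEGROUP = 'PRIMARY'
    FROM DISK = 'D:\Backups\CustomerDb_primary.bak'
    WITH PARTIAL, RECOVERY,
         MOVE 'CustomerDb'     TO 'D:\Data\CustomerDb_copy.mdf',
         MOVE 'CustomerDb_log' TO 'D:\Data\CustomerDb_copy.ldf';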
You might also investigate at some point whether any other SQL Server features, such as merge replication or those mentioned above (bcp, import/export wizards), might provide a solution that is simpler or more adequately meets your needs.
I'm trying to head this one off at the pass. I've got two database servers (DEV and PRD) and I have my database on the DEV server. I am looking to deploy v1 of my application to the PRD server.
The question is this: Say in two months I am ready to ship v1.1 of my application, which introduces two new views, six new fields (three fields in each of two tables), and an updated version of my sproc that creates records in the tables with the new fields. My DEV database has the new schema, but my PRD database has the real data, so I can't simply copy the .mdf file, since I want to keep my PRD data but include my new schema.
I understand doing the initial creation of tables, views, and sprocs via saved .sql files; but what I'm wondering is, is it possible to use SSMS to create the appropriate "alter table" scripts, or do I need to do this manually?
I have handled this with a release update SQL script that applies the changes to the previous version.
You either need to code this yourself or use one of the many DBA tools to do database compares and generate a diff script.
There are tools that will do this for you; SQL Compare is one of them and the one I like best.
Otherwise you have to code these yourself, and don't forget to also script the permissions if you recreate the proc (unless you use ALTER PROC, in which case permissions are preserved).
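For example (the procedure, column and role names here are purely illustrative):

-- Recreating a proc drops its permissions, so they have to be granted again:
DROP PROCEDURE dbo.usp_GetOrders;
GO
CREATE PROCEDURE dbo.usp_GetOrders AS SELECT OrderId, OrderDate FROM dbo.Orders;
GO
GRANT EXECUTE ON dbo.usp_GetOrders TO app_role;
GO

-- ALTER keeps the existing permissions intact:
ALTER PROCEDURE dbo.usp_GetOrders AS SELECT OrderId, OrderDate, Total FROM dbo.Orders;
GO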
Since your database changes should be in scripts that are under source control, you just load them with the version that you are moving to prod, just like any other code associated with that version. One thing you never, under any circumstances, do is make changes to the dev (or any other) database using the user interface.
Try the patching engine found in DBSourceTools.
http://dbsourcetools.codeplex.com
DBSourceTools is a utility to help developers get their databases under source control.
Simply point it at a source database, and it will script all database objects, including data, to disk.
Once you have a target database (v1), you can then place your patch scripts in the patches directory, and DBSourceTools will run these patches in order after re-creating your database.
This is a very effective means of thoroughly testing your change scripts.
Here's a more general question on how you handle database schema changes in a development team.
We are a team of developers and the databases used during development are running locally on everyone's box as we want to avoid the requirement to have web access all the time. So running a single central database instance somewhere is not a real option.
Whenever one of us decides that it is time to extend/change the DB schema, we mail database files (MYI/MYD) or SQL files to execute around, or give the others instructions on the phone about what they need to do to get the changed code running on their local DBs. That's not the perfect approach, for sure. The same problem arises when we need to adjust the DB schema on staging or production once a new release is ready.
I was wondering ... how do you guys handle this kind of stuff? For source code, we use SVN.
Really appreciate your input!
Thanks,
Michael
One approach we've used in the past is to script the entire DDL for the database, along with any test/setup data needed. Store that in SVN, then when there's a change, any developer can pull down the changes, drop the database, and rebuild it from the script files.
At the very least you should have the scripts of all the objects in the database (tables, stored procedures, etc) under source control.
I don't think mailing schema changes is a real option for a professional development team.
We had a system on one of my previous teams that was the best I've encountered for dealing with this situation.
The nightly build of the application included a build of a database (SQL Server). The database got built to the Test DB server. Each developer then had a DTS package (this was a while ago, and I'm sure they upgraded to SSIS packages) to pull down that nightly DB build to their local DB environment.
This kept the master copy in one location and put the onus on the developers to keep their local dev databases fresh.
At my work, we deal with pretty large databases that are time-consuming to generate, so for us, starting from scratch with a new DB isn't ideal. Like Harper, we have our DDL in SVN. Additionally, we store a version number in a database table. Every check-in that changes the DB must be accompanied by a script that:
Will upgrade the database schema and modify any existing data appropriately, and
Will update the version number in the database.
Further, we number the scripts and database versions such that a script we've written knows how to upgrade further along a branch or from an older branch to a newer one without any input from the developer (apart from the database name and the directory to the upgrade scripts).
Thus, if I've got a copy of a customer's 4GB DB from a year-old version and I want to test how their data will work with the version we cut yesterday, I can just run our script and let it handle the upgrades rather than having to start from scratch and redo every INSERT, UPDATE and DELETE performed since the database was created.
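A simplified sketch of the versioning idea; the table, column and version numbers below are invented:

-- One-row table holding the current schema version
CREATE TABLE dbo.SchemaVersion (VersionNumber int NOT NULL);
INSERT INTO dbo.SchemaVersion (VersionNumber) VALUES (41);
GO

-- Upgrade script "42": only runs if the database is at the expected version
IF (SELECT VersionNumber FROM dbo.SchemaVersion) = 41
BEGIN
    ALTER TABLE dbo.Customer ADD Email varchar(255) NULL;
    UPDATE dbo.SchemaVersion SET VersionNumber = 42;
END;
GO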
We have a non-SQL description of the database schema. When the application starts, it compares the desired database schema with the actual database schema, and performs whatever ADD TABLE, ADD COLUMN, ADD INDEX, etc. statements it needs to do to get the database to look right.
This doesn't handle every case; sometimes you have to delete the database and recreate it if you've changed something that the schema resolver can't handle, but most of the time we don't need to worry about it.
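As a rough illustration of the kind of check such a resolver performs, here expressed in plain T-SQL against INFORMATION_SCHEMA (the table and column names are invented):

-- Desired schema declared as data; report the columns the actual database is missing
DECLARE @Desired TABLE (TableName sysname, ColumnName sysname);
INSERT INTO @Desired VALUES ('Customer', 'Id'), ('Customer', 'Name'), ('Customer', 'Email');

SELECT d.TableName, d.ColumnName
FROM @Desired AS d
LEFT JOIN INFORMATION_SCHEMA.COLUMNS AS c
       ON c.TABLE_NAME = d.TableName AND c.COLUMN_NAME = d.ColumnName
WHERE c.COLUMN_NAME IS NULL;   -- these are the ADD COLUMN statements the resolver would issue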
I'd certainly keep the database schema in source code control.
At my present job, every time there's a schema change, we write the SQL for the change (alter table xyz add column ...) and put it in SVN. Then developers can update test databases by running this script. It's pretty clumsy but it works.
At a previous job I wrote some code that at application start-up would automatically compare the actual database schema to what it expected, and if it was not up to date perform the updates. Mostly this was done for deployment reasons: When we shipped new copies of the software, it would then automatically update the user's database. But it was also handy for developers.
I think there should be some generic SQL tool to do this. Maybe there is, but I've never seen one.
I am writing code to migrate data from our live Access database to a new Sql Server database which has a different schema with a reorganized structure. This Sql Server database will be used with a new version of our application in development.
I've been writing migration code in C# that calls SQL Server and Access and transforms the data as required. For the first time, I migrated a table that has entries related to new entries of another table that I had not updated recently, and that caused an error because the record in the corresponding table in SQL Server could not be found.
So my SQL Server ProductionRuns table has data only up to 1/14/09, and I'm continuing to migrate more tables from Access. I want to write an update method that can figure out what the new stuff is in Access that hasn't been reflected in SQL Server.
My current idea is to write a query on the SQL side which does SELECT Max(RunDate) FROM ProductionRuns, to give me the latest date in that field in the table. On the Access side, I would write a query that does SELECT * FROM ProductionRuns WHERE RunDate > ?, where the parameter is that max date found in SQL Server, perform my translation step in code, and then insert the new data in SQL Server.
What I'm wondering is, do I have the syntax right for getting the latest date in that SQL Server table? And is there a better way to do this kind of migration of a live database?
Edit: What I've done is make a copy of the current live database, which I can then migrate without worrying about changes and use for testing during development; then I can migrate the latest data whenever the new database and application go live.
I personally would divide the process into two steps.
Create an exact copy of the Access DB in SQL Server and copy all the data into it
Copy the data from this temporary SQL Server DB to your destination database
That way you can write a set of SQL code to accomplish the second step.
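For instance, once the Access data sits in a staging database on the same SQL Server instance, the second step can be plain set-based SQL (the database, table and column names below are illustrative):

-- Transform and load from the staging copy into the reorganised schema
INSERT INTO NewAppDb.dbo.ProductionRuns (RunDate, MachineId, Quantity)
SELECT a.RunDate,
       m.MachineId,            -- resolve the foreign key against the new lookup table
       a.Qty
FROM StagingDb.dbo.ProductionRuns AS a
JOIN NewAppDb.dbo.Machines AS m
     ON m.LegacyName = a.MachineName;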
Alternatively use SSIS
Generally, when you convert data to a new database that will take its place in production, you shut out all users of the database for a period of time, run the migration, and turn on the new database. This ensures no changes to the data are made while doing the conversion. Of course, I never would have done this using C# either; data migration is a database task and should be done in SSIS (or DTS if you have an older version of SQL Server).
If the database you are converting to is just in development, I would create a backup of the Access database and load the data from there, to test the data loading process and to get the data in so you can do the application development. Then, when it is time to do the real load, you just close down the real database to users and load from it. If you are trying to keep both in sync while you develop, well, I wouldn't do that, but if you must, make a nightly backup of the file and load it first thing in the morning using your process.
You may want to look at investing in a tool like SQL Data Compare.
I believe it has support for access databases too, and you can download a trial.
If you are happy with your C# code but it fails because of the constraints in your destination database, you can temporarily disable them and then re-enable them after you copy the whole lot.
I am assuming that your destination database is a brand-new DB with no data, and that it is not used by anyone while the transfer happens.
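A common way to do that in SQL Server is with NOCHECK; sp_MSforeachtable is undocumented but widely used, so treat this as a sketch:

-- Disable all foreign key and check constraints before the bulk copy
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ... copy the data ...

-- Re-enable and re-validate the constraints afterwards
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';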
It sounds like you have two problems:
You're migrating data from one database to another.
You're changing your schema.
Doing either of these things is tricky if you are trying to migrate the data while people are using the data.
The simplest approach is to migrate the data based on a static copy of the data, and also to queue updates to that data from the moment you captured the static copy. I don't know how easy this is in Access, but in SQL Server or Oracle you can use the transaction/redo logs for this, or a manual solution using triggers. The poor man's way of doing this is to create triggers on all the relevant tables that log the primary keys of the records that have changed. Then, after the old database is shut off, you can iterate over those keys, get those records from the old database, and put them into the new database. Just copy the whole record; if the record was deleted, then delete it from the new database.
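If the source were SQL Server, a poor man's change-tracking trigger might look like this (the log table and the table/key names are placeholders):

CREATE TABLE dbo.ChangeLog
(
    TableName  sysname  NOT NULL,
    RecordId   int      NOT NULL,
    ChangeTime datetime NOT NULL DEFAULT (getdate())
);
GO

CREATE TRIGGER trg_ProductionRuns_Log
ON dbo.ProductionRuns
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- Log the primary keys of every affected row; deleted rows show up in the "deleted" table
    INSERT INTO dbo.ChangeLog (TableName, RecordId)
    SELECT 'ProductionRuns', Id FROM inserted
    UNION
    SELECT 'ProductionRuns', Id FROM deleted;
END;
GO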
Your problem is compounded by the fact that you can't simply copy the data; you have to transform it. This means you probably have to shut down both databases and re-migrate the records based on the change list. It will take a lot of planning to ensure you get things right, and I'd recommend writing a testing script that can validate that the resulting data is correct.
Also, I'd ensure that the code for the migration runs inside one of the databases if possible; otherwise you are copying the data twice, which will significantly harm performance.