Backup only new or edited records - sql-server-express

I have built a SQL Server Express database that will be housed on an external hard drive. I need to be able to add/update data in the database on my system as well as on other systems, and then back up or transfer only the data that has been added or edited to the external hard drive. What is the best way to accomplish this?

You would probably use replication for this, but as you're using SQL Server Express, that isn't an option.
You'll need some sort of mechanism to determine what has changed between backups, so each table will need a timestamp or last-updated datetime column that's updated every time a record is inserted or updated. It's probably easier to update this column from a trigger than from your application.
Once you know which records were inserted or updated, it's just a matter of selecting the rows that changed since the last time the backup was performed.
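A minimal sketch of what that could look like in T-SQL (the table, key, and column names here are invented for illustration):

    -- Assumes each table carries a LastUpdated datetime column.
    CREATE TRIGGER trg_Customer_LastUpdated
    ON dbo.Customer
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE c
        SET c.LastUpdated = GETDATE()
        FROM dbo.Customer AS c
        JOIN inserted AS i ON c.CustomerID = i.CustomerID;
    END
    GO

    -- The incremental backup then just selects rows changed since the last run.
    DECLARE @LastBackupTime datetime;
    SET @LastBackupTime = '20100101';  -- in practice, the time stored after the previous export
    SELECT * FROM dbo.Customer WHERE LastUpdated > @LastBackupTime;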
An alternative is to add a bit column that's set on every change, but this seems less flexible.

Sherry, please explain the application and the rationale for your design. The database engine has no built-in mechanism to do this; you'll have to track changes yourself and then do whatever you need to do. SQL Server 2008 has a change tracking feature built in, but I don't think that will help you with Express.
Also, take a look at the Sync Framework. Adding it to your platform is a significant undertaking, but if keeping data in sync is one of the main objectives of your app, it may pay off for you.

In an application
If you are doing this from an application, then every time a row is updated or inserted, set a bit/bool column called dirty to true. When you select the rows to be exported, select only the rows that have dirty set to true. After exporting, set all the dirty flags back to false.
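As a sketch in T-SQL (dbo.Customer and its columns are invented; the principle is what matters):

    -- Application write: flag the row as dirty in the same statement.
    UPDATE dbo.Customer
    SET Name = 'New name', dirty = 1
    WHERE CustomerID = 42;

    -- Export: pick up only the flagged rows...
    SELECT * FROM dbo.Customer WHERE dirty = 1;

    -- ...then clear the flags once the export has succeeded.
    UPDATE dbo.Customer SET dirty = 0 WHERE dirty = 1;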
Outside an application
DTS Wizard
If you are doing this outside of an application, run the DTS Wizard from the command line:

    "C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSWizard.exe"
This article explains how to get the DTS Wizard (it is not included by default):

It is included in the SQL Server Express Edition Toolkit – and only that. If you have installed another version of SQL Server Express, it works fine to install this package afterwards without uninstalling the others. Get it here: http://go.microsoft.com/fwlink/?LinkId=65111
The DTS Wizard is included in the option “Business Intelligence Development Studio”, so be sure to select that for install.
If you have installed another version of SQL Server Express, the installer might report that there is nothing to install. Override this by checking the checkbox that displays the version number (in the installer wizard).
After the install has finished, the DTS Wizard is available at C:\Program Files\Microsoft SQL Server\90\DTS\Binn\DTSWizard.exe. You might want to make a shortcut, or even add it to the Tools menu of SQL Server Management Studio.
bcp Utility
The bcp utility bulk copies data between an instance of Microsoft SQL Server and a data file in a user-specified format. The bcp utility can be used to import large numbers of new rows into SQL Server tables or to export data out of tables into data files. Except when used with the queryout option, the utility requires no knowledge of Transact-SQL.
To import data into a table, you must either use a format file created for that table or understand the structure of the table and the types of data that are valid for its columns.
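For instance, a hypothetical round trip with bcp (the server, database, and table names are placeholders; -T uses Windows authentication and -n keeps SQL Server's native format):

    bcp MyDb.dbo.Customer out C:\export\Customer.dat -S .\SQLEXPRESS -T -n
    bcp MyDb.dbo.Customer in C:\export\Customer.dat -S OTHERPC\SQLEXPRESS -T -n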

Related

SQL Server database : amalgamate 90 database update scripts into a single script

I have an application that has been released for several years. I have updated the SQL Server database around 90 times and each time I have created an update SQL script, so when the user runs the new software version the database is updated.
I would like to amalgamate all the updates into a single SQL script to make deployment faster and simpler. Is there an easy way to do this? I presume I could simply grab the latest database, after it has run through all 90 updates, via SQL Server Management Studio?
EDIT
For clarification: the software I wrote automatically applies new database updates when the user downloads the latest version. This is done via C#/.NET, which on startup looks for embedded SQL scripts named in the format XX_update.sql and calls each script one by one, i.e.
1_update.sql - this creates the tables and initial data etc. This was my initial release database.
2_update.sql - updates to the initial database, such as adding a new stored procedure or changing a column datatype, etc.
3_update.sql
4_update.sql
...
90_update.sql (4 years and lots of program updates later!).
Ideally, I would install my software and create a brand new database by running through all 90 update scripts, then take this database and convert it into a single script with which I can replace all 90 scripts above.
This is too long for a comment.
There is no "easy" way to do this. It depends on what the individual updates are doing.
There is a process you can follow in the future, though. Whenever an update occurs, you should maintain scripts both for incremental updates and for complete updates. You might also want to periodically introduce major versions, and be able to upgrade to and from those.
In the meantime, you'll need to build the complete update by walking through the individual ones.
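One way to keep the incremental scripts honest as you go is a version check at the top of each one. This is only a sketch; the SchemaVersion table is my own assumption, not something from the question:

    -- 42_update.sql: refuse to run unless the database is at version 41.
    IF (SELECT Version FROM dbo.SchemaVersion) <> 41
    BEGIN
        RAISERROR('Database is not at version 41; aborting.', 16, 1);
        RETURN;
    END

    ALTER TABLE dbo.Orders ADD ShippedDate datetime NULL;  -- the actual change

    UPDATE dbo.SchemaVersion SET Version = 42;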
I use a similar system at work, and while I prefer to run the scripts separately, I have sometimes amalgamated several scripts when they had to be deployed by another person, with no problems.
In SQL Server, the rule is that as long as you separate the scripts with GO and use SQL Server Management Studio or another tool that processes the batch separator properly, there is no problem in amalgamating them: it still looks like separate scripts to SQL Server, because instead of being sent as one big script, the file is sent in batches using GO as the separator (an amalgamated file is sketched below).
The only problem is that if you have an error in the middle of the script, the tool will continue sending the rest of the batches to the server (at least by default; maybe there is an option to change this). That is why I prefer, when possible, to run them separately with a tool, just to err on the safe side, stop if there is a problem, and easily locate the problematic command.
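For illustration, an amalgamated file is nothing more than the original scripts back to back, each batch ending in GO (the object names here are made up):

    -- contents of 2_update.sql
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
    GO
    -- contents of 3_update.sql
    CREATE PROCEDURE dbo.GetCustomerByEmail @Email nvarchar(256)
    AS
        SELECT CustomerID, Name FROM dbo.Customer WHERE Email = @Email;
    GO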
Also, for new deployments, your best option is what you say: use a database that is already updated instead of taking the original one and applying all the updates. And to avoid being in this situation again, you can keep an empty template database that is not used for any testing and update it whenever you update the rest of the databases.
Are you running your updates manually on the server? Instead, you can create a .bat file, PowerShell script, or an exe. The update scripts in your folder can be numbered, and the script can then pick up the updates one by one and execute them over the database connection instead of you doing it by hand.
If you have multiple script files and you want to combine them into one, rename them as .txt, combine them, and rename the resulting file back to .sql.
https://www.online-tech-tips.com/free-software-downloads/combine-text-files/
This is easy. Download SQL Server Data Tools and use that to create the deployment script. If there is no existing database, it will create all the objects; if targeting an existing older version, it will perform a diff against the target database and generate scripts that create the missing objects and alter the existing ones.
You can also generate scripts for the current version using SSMS, or use SSMS to take a backup and restore it as part of an install.

VB.NET - Local database (SQL Server Compact 3.5 database) data gone after update?

I have created an application in VB.NET (using Microsoft Visual Basic 2010 Express) with a local database (SQL Server Compact 3.5 database) to store data.
I have installed this on the users' computers, and added a "search online for updates" functionality (which can be selected when publishing).
Now I have noticed that sometimes when I upload a new version, the data in the database gets cleared (possibly when I opened the database while developing).
This is of course not how I want the system to behave; the data should always remain on the user's computer.
In 'Application Files' the database file (*.sdf) is currently set to 'Data File (Auto)', but I'm unsure of exactly how this works.
Could anyone help me understand how all of this works, and tell me how I can be sure that the data in the user's database will remain, even after an update?
If there is no solution to ensure this, is there a way to safely back up the data and reload it?
Thanks in advance!!
Basically, the ClickOnce install overwrites everything in the program folder that is included in your publish. So if you include the .sdf, it will be overwritten when the installer is executed. What you need to do instead is select "Exclude" on the .sdf. This will keep the database in its previous state.
So my recommendation would be to have two different publishes: one that contains the .sdf, used only for first-time installation, and then exclude the .sdf in all the update publishes.
To perform updates on your tables, you would have to write the SQL for it in your software: on startup, check all tables to see that they have the proper setup, and if they don't, add the missing columns.
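SQL Server Compact doesn't run multi-statement batches, so the "check, then fix" logic lives in your VB.NET code: query the schema first, and only issue the ALTER when the column is missing. A sketch of the two statements involved (the table and column are invented):

    -- 1) Run from the application; a count of 0 means the column is missing.
    SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'Customer' AND COLUMN_NAME = 'Email';

    -- 2) Only if the count was 0, add the column.
    ALTER TABLE Customer ADD Email nvarchar(100) NULL;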
Hope this helps.

SQL SERVER Project

My application database has no project and no SourceSafe. I planned to make my DB a project and add it to TFS, but I have no idea how to script the stored procedures, triggers, views, and functions, nor what the best practice is for making an update script of all my stored procedures, triggers, views, and functions for my customers' DBs.
The best procedure (IMHO) is to manually maintain a strict version of your schemas. Then, when you need to make changes, you write a delta script to move from one version to the next. I suggest you write the DDL scripts by hand; you can make them concise and comment them.
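For example, a hand-written delta script can stay short and self-documenting. This is just a sketch with invented names:

    -- Delta script: schema v5 -> v6
    -- Adds an optional theme preference to user profiles.
    ALTER TABLE dbo.Users ADD ThemeName nvarchar(50) NULL;
    GO

    -- Backfill existing rows so the application can rely on a value.
    UPDATE dbo.Users SET ThemeName = 'Classic' WHERE ThemeName IS NULL;
    GO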
You can use a tool like Visual Studio Team System for Database Architects; take a look at Running static code analysis on SQL Server database with Visual Studio Team System for database architects, which will show you how to import the data. Disregard the static code analysis that comes later; it does not apply to your question.
I've found that a good way to get SQL scripts into SCM from an existing database is to use SSMS's "export all to script" option, or whatever it's called; I can't remember now.
Then, for every subsequent change, you add the change script to your SCM with a different version number in the file name.
Every release (or set cycle, depending on your development/release methodology) you apply all change scripts, then re-script the entire database, tag it, and start again.
The best way to do it: save the database in TFS as a set of database creation scripts, i.e. a table MyTable should be added to TFS as a MyTable.sql file (CREATE TABLE...), etc. We are using SQL Examiner to do this; see the following article: How to keep your database under version control.
We are working with SVN and I have never tested SQL Examiner with TFS, but I know that the tool supports TFS.

Creating a CHANGE script in Management Studio?

I was wondering if there is a way to automatically append to a script file all the changes I am making to my columns, tables, relationships etc...
The thing is I am doing a lot of different changes on a TEST db and the idea will be to apply this change script when I move the test db to production... hence keeping production data but applying all schema and object changes.
Is there an easy way to do this? Can it also migrate database diagram changes?
I have seen how you can create a change script each time I make a change, but this means I have to copy and paste it into a master file. Actually, that's pretty easy!
I was just wondering if I was missing something?
Do not make changes to the test server using the UI. Write scripts and keep them under source control. You can test your scripts starting from backups of the live data, and you can tune your scripts until they achieve the desired result. Then you can check in the scripts for reference and later apply them to the live server. See this article: Version Control and Your Database.
BTW, check out the SSMS Tools Pack; I think it may do what you want (I'm not sure). My advice stands nonetheless: version your schema, use explicitly created/saved scripts, and use source control.
There's no way to directly generate a "delta" script in SSMS.
However, if every time you publish changes you script out the entire database, including data, to SQL using the SQL Server Database Publishing Wizard, you should be able to extract diffs between the versions and get your deltas that way.
If money is no object, you can purchase Visual Studio Team System Database Architect edition and use its fantastic database comparison tools to generate and version control exactly the diffs you want.
Try using tablediff, which came with SQL Server 2005.
SQL Server 2005 TableDiff Utility
tablediff Utility
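A hypothetical invocation (server, database, and table names are placeholders; -f asks tablediff to generate a T-SQL fix script for the differences it finds):

    tablediff -sourceserver SRV1 -sourcedatabase TestDb -sourcetable Customers -destinationserver SRV2 -destinationdatabase ProdDb -destinationtable Customers -f C:\diffs\Customers_fix.sql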
We have the process where when a developer gets done with a change, they then script it out and check it into Subversion. In Subversion we have a folder for Tables, Stored Procs, Data, etc. They script it out so it is repeatable (i.e. don’t insert the new data if it is already there.) This is important to do anyway so you keep the history of changes for a given object in the database.
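A repeatable data script typically guards each insert, along these lines (the lookup table is invented for illustration):

    -- Safe to run more than once: the row is only inserted if it's not there yet.
    IF NOT EXISTS (SELECT 1 FROM dbo.OrderStatus WHERE StatusCode = 'SHIP')
        INSERT INTO dbo.OrderStatus (StatusCode, Description)
        VALUES ('SHIP', 'Shipped');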
In the past, we would just enter each of the files that we wanted scripted out into a text file (e.g. FileListV102.txt). When we were ready to make a release, we would do a “get latest” on all of the files (from VSS back then). We then had a simple utility that would read the “file list” file and open each of those files in turn, concatenating them into an output file. That is pretty easy to code.
We outgrew that, and now we have a release management tool (which can be found here and will be on sale mid-September) that takes all of the files and creates one big SQL script file out of them. It does this in the order you would expect based on the folder names, so files found in the "Tables" folder are run before those in the "Data" folder, etc.
Either way, once you are done you have a big SQL script file that you can then apply to a fresh copy of production and that is what you test against.
I know I'm way late to the party, but I just wanted to add that there are dozens of third-party products out there. Some are very good, some are very cheap or free, and some are a mixture. I listed 22 here:
http://bertrandaaron.wordpress.com/2012/04/20/re-blog-the-cost-of-reinventing-the-wheel/
We have been using a relatively new product called Kal Admin.
It has a change management feature and makes distributing selected changes to other databases very easy. We used to do this by comparing two databases, but that did not satisfy our need for change tracking.
BTW, Kal Admin has metadata and data compare capabilities as well.

What is the best way to transfer a table or tables from one SQL server to another?

I have been developing in VB.NET and SQL Server 2008 for a while now, but haven't got into live installs yet. The database system I used to be on had the ability to archive multiple tables into a .dga file, as it was called. I could then restore the .dga file into another database or onto another server.
I'm looking for the easiest way to accomplish something similar in SQL Server.
If you want to transfer specific tables, use Data Transformation Services (right-click on the database in SQL Server Management Studio and select "Import Data"; this brings up the dialog for it). Of course, this assumes that you have both databases available to you.
If you are comfortable with replacing the database as a whole, you can easily back up the database and then restore it into a new one through SQL Server Management Studio (or by running the appropriate T-SQL, as sketched below).
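A minimal sketch of the backup/restore route in T-SQL (the paths and logical file names are placeholders; check yours with RESTORE FILELISTONLY):

    -- On the source server:
    BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak';

    -- On the destination server:
    RESTORE DATABASE MyDb FROM DISK = 'C:\Backups\MyDb.bak'
    WITH MOVE 'MyDb' TO 'C:\Data\MyDb.mdf',
         MOVE 'MyDb_log' TO 'C:\Data\MyDb_log.ldf';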
I would go for one of the following:
From MS SQL Management Studio, right click on the database / Tasks / Generate scripts
From Visual Studio, in the Server Explorer tab, "publish to provider"
Both will launch a wizard allowing you to export the tables you want the way you want (including data or not, creation scripts or not, etc.)
If you want to move tables without data, the simplest thing is to script the tables you want and run the script.
If you want to put the whole database on prod, including data (scrub out test records first!), do a backup and restore on the other server.
For future changes, we script all our db changes, commit them to Subversion, and then run them as part of the deployment process. There are also tools that look at the structural differences between the two servers and create scripts; Red Gate's SQL Compare is really good for this.
In addition to HLGEM's suggestions, you can look into SSIS if this is an ongoing process.