I have a folder in TFS which contains SQL scripts. At the moment, I manually add a comment and update a version number inside the comment every time I make a change and check it back in. This works, but I was hoping there might be a better way. Is there a way to automate this in TFS?
I have read the following article
Version control project files
Do I have to go through such a process for simple .sql files? Are there any other simple ways?
There are a few ways you can do this:
Create an automated build in TFS and write a custom build step / PowerShell script to parse the appropriate SQL scripts, read the version, increment it, and store the new version, either by checking in the updated file or in a local store (see the sketch after this list)
Use a database project (part of SQL Server Data Tools), which will output a DACPAC. Inside the database project, you can set the version as specified here. This stores the version in the project file. If you update your TFS build number to be digits only, you can then update the project file to set that value to match the build using a custom build task. For example, your build number could be yyyy.m.d.r, where r is the number of times the build has run that day (TFS manages that - it's the revision variable). Or, you could set the <DacVersion> tag to something like 2.1.0.0 and have your build replace the last digit with yyyymmddr.
I'd recommend using a database project. It's pretty easy to create a new database project off an existing database.
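For the first option, a minimal PowerShell sketch of the version-bump step might look like the following (this assumes each script carries a header comment like -- Version: 1.0.3; the folder path, comment format, and tf.exe usage are placeholders, not an official recipe):

    # Bump the patch number in the header comment of every SQL script.
    Get-ChildItem "C:\Build\Sources\SqlScripts" -Filter *.sql | ForEach-Object {
        $content = Get-Content $_.FullName -Raw
        $updated = [regex]::Replace($content, '-- Version: (\d+)\.(\d+)\.(\d+)', {
            param($m)
            $patch = [int]$m.Groups[3].Value + 1
            "-- Version: $($m.Groups[1].Value).$($m.Groups[2].Value).$patch"
        })
        if ($updated -ne $content) {
            # tf checkout $_.FullName
            Set-Content -Path $_.FullName -Value $updated
            # tf checkin $_.FullName /comment:"Bump script version ***NO_CI***" /noprompt
        }
    }

(The ***NO_CI*** token in the check-in comment is the TFS convention for keeping that check-in from triggering the build again.)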
The first way mentioned by Jacob above can achieve that. If you just want to increment the version number of the script/folder, just create a CI build definition.
Actually, you can just enable Label sources and set the Label format with predefined environment variables such as $(build.buildNumber), without publishing any artifacts during the build process.
Thus, it will automatically trigger the CI build when you check in files, and the source (SQL script/folder) will be labeled with the incrementing number.
Then you can find the specific versions via the label.
I have the following variables defined:
Now once a build is complete (the last step in the build process), I want to update the VersionRevision variable, basically increment it.
So I'm looking for an API I can call from C# (creating a console application) or a PowerShell script to edit the build definition (if I have to do this)?
You can use the VSTS REST API to update the variable value in the build definition. Both a console application and a PowerShell script are OK for this.
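A rough PowerShell sketch, assuming a VSTS account, a build definition id of 42, and a personal access token (the account URL, project, and variable names are placeholders):

    # Fetch the build definition, increment the VersionRevision variable, and save it back.
    $pat = "<personal access token>"
    $headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
    $url = "https://myaccount.visualstudio.com/DefaultCollection/MyProject/_apis/build/definitions/42?api-version=2.0"

    $definition = Invoke-RestMethod -Uri $url -Headers $headers -Method Get
    $definition.variables.VersionRevision.value = ([int]$definition.variables.VersionRevision.value + 1).ToString()

    # The PUT replaces the stored definition with the updated body.
    Invoke-RestMethod -Uri $url -Headers $headers -Method Put -ContentType "application/json" -Body ($definition | ConvertTo-Json -Depth 100)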
If I understand correctly, you want to get these build variables and assign them as your version number.
Updating and incrementing the VersionRevision variable after the build completes is not a good approach, and there seems to be no built-in way to achieve it.
In TFS build there is a $(Rev:.r), which means:
Use $(Rev:.r) to ensure that every completed build has a unique name. When a build is completed, if nothing else in the build number has changed, the Rev integer value is incremented by one.
Source: Specify general build definition settings
To version your assemblies, you could just add a PowerShell script to your build definition; for detailed instructions, follow this link from MSDN: Version your assemblies
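As an illustration of that approach (a sketch, not the exact script from the MSDN article), stamping AssemblyInfo files from the build number could look like this:

    # Stamp every AssemblyInfo.cs with the four-part version embedded in the
    # TFS build number, e.g. MyBuild_2016.5.12.3 -> 2016.5.12.3.
    $buildNumber = $env:BUILD_BUILDNUMBER
    if ($buildNumber -match '\d+\.\d+\.\d+\.\d+') {
        $version = $Matches[0]
        Get-ChildItem $env:BUILD_SOURCESDIRECTORY -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
            (Get-Content $_.FullName) `
                -replace 'AssemblyVersion\("[\d\.]+"\)', "AssemblyVersion(""$version"")" `
                -replace 'AssemblyFileVersion\("[\d\.]+"\)', "AssemblyFileVersion(""$version"")" |
                Set-Content $_.FullName
        }
    }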
And usually we only define and assign variables for the major and minor version. If you want to change their values, you may need to manually edit the build definition.
More related links about how to manage version numbers as part of your vNext builds:
vNext Build Awesomeness – Managing Version Numbers
Generate custom build numbers in TFS Build vNext
We've been using TFS since around 2009 when we installed TFS2008. We upgraded to TFS2010 at some point and we've been using it for source control, work item management, builds etc.
Our TFSVersionControl.mdf file is 287,120,000 KB (273GB). We ran some queries and found that our tbl_BuildInformationField table is massive. It has 1,358,430,452 rows, which take up 150,988,624 KB (143GB). We have multiple active products across multiple active builds, with more than one solution per build, and the solutions aren't free of warning messages.
My questions:
1. Is it possible to stop MSBuild from spamming the tbl_BuildInformationField table so much? I.e. only write errors and general build information, and not all the warnings for every project?
2. Is there a way to purge or clean up old data from this table?
3. Is 273GB for 4 years of TFS use an average size?
4. Is 143GB for tbl_BuildInformationField a "normal" size?
The table holds the values and output of the build process. Take note that the build retention policy doesn't actually delete the build object; like everything else in TFS, the object is marked as deleted, and only public visibility and the drop location are cleared.
If you have retained the same build definitions for a very long time (when a build definition is deleted, the related objects get removed as well), I would suggest querying for build info, including deleted builds, using the TFS API; the same API will also allow you to remove them for good (see the sketch below). Deleting the build definitions themselves probably will not work and will fail with a timeout error.
You can consult the following:
http://blogs.msdn.com/b/adamroot/archive/2009/06/12/working-with-deleted-build-data-in-team-foundation-server-2010-beta-1.aspx
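A PowerShell sketch of that clean-up using the TFS client API (the collection URL and team project name are placeholders, and Destroy is permanent, so try it against a restored copy first):

    # Load the TFS 2010-era client assemblies (versions vary by TFS release).
    Add-Type -AssemblyName "Microsoft.TeamFoundation.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
    Add-Type -AssemblyName "Microsoft.TeamFoundation.Build.Client, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"

    $tfs = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection([Uri]"http://tfsserver:8080/tfs/DefaultCollection")
    $buildServer = $tfs.GetService([Microsoft.TeamFoundation.Build.Client.IBuildServer])

    # Query only the soft-deleted builds for the team project.
    $spec = $buildServer.CreateBuildDetailSpec("MyTeamProject")
    $spec.QueryDeletedOption = [Microsoft.TeamFoundation.Build.Client.QueryDeletedOption]::OnlyDeleted
    $result = $buildServer.QueryBuilds($spec)

    # Destroy, unlike a normal delete, removes the underlying rows for good.
    if ($result.Builds.Count -gt 0) {
        $buildServer.Destroy($result.Builds)
    }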
I have managed to do continuous deployment for my web project using TFS MSBuild.
I have googled for a few hours but couldn't find a relevant link for achieving continuous deployment of a Windows service.
Is it possible to do CD for a Windows service using TFS build definitions? I.e., for every check-in the steps below should be performed. I am using TFS 2010 with Windows Server 2008 R2.
1] Stop the service,
2] Copy the respective project folder from the (source) build server to the (destination) 'staging server1' or 'staging server2',
3] Start the service (willing to do this step manually)
Any blog or tutorial references for achieving this? My guess is I need to use PowerShell scripts, but I'm not sure.
Should be OK. You'll need to install an agent on the box you're deploying to, and you'll need to be able to edit the XAML templates (you'll probably want to copy your existing template that does your build and just add the stop/copy/start stuff onto the end of it).
After your CI build, you'll need to edit it (the XAML template). To start and stop the service you can use the "Invoke Process" activity (you'll probably want to make it generic and pass in the service name as an argument - note you can change the display names etc. in the Metadata argument so it appears meaningful in your build definition).
As far as copying stuff across goes, you can do this fairly easily by accessing properties like the drop location.
Should be fairly straightforward - once you get your head round modifying the templates!
Edit:
Sorry for not responding sooner. I'd have to revise my earlier comment: this isn't as straightforward as it seems unless you really know what you want. I have been thinking about this and, like skinning cats, there is more than one way to achieve it... I've rewritten this a few times, so I hope the edits make sense :)
Boils down to the following:
1) Pass into your template the build agent/machine you want to run this on (this can be done as a simple string, or as an AgentReservationSpec - up to you), since it's unlikely to be the machine that you run your actual CI build on. This is done in the Arguments section of the XAML; as noted before, if you want to edit the display name/description you can edit the Metadata argument. This machine needs a TFS agent installed, of course.
2) Run the task on the remote machine. This is done by adding the Agent Scope activity to your template; you will have to use the info from step 1 to get the ReservationSpec (so it would be easier if you add the argument as an AgentReservationSpec, or you'll need to resolve this in the template).
2.1) Run the stop/uninstall. This is done by dropping in two Invoke Process activities; Invoke Process can take arguments and needs to be pointed at the executable you're running: one for the NET command (i.e. NET STOP), and one for InstallUtil.exe.
2.2) Copy the files from your CI build to the remote server. You can use the Copy Directory activity for this; it needs a couple of parameters, the main one being the source location. You should be able to drop in a GetBuildDetail activity, give it a name, then reference .DropLocation to get this; the destination is wherever you're installing to.
2.3) Install the new service as in step 2.1: use an Invoke Process activity to install the service, then another to start it up.
I haven't covered everything, but I haven't set this up myself, so I'm sure there are a few pitfalls or things I haven't thought of. Off the top of my head this makes sense, but maybe someone who knows better can poke a few holes in it :)
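For what it's worth, if you'd rather have a single Invoke Process call one script than wire up each step as a separate activity, a rough PowerShell sketch of steps 2.1-2.3 might be (the service name, paths, and drop location are all placeholders):

    # Stop and uninstall the old service, copy the new build output, reinstall and start.
    $serviceName  = "MyWindowsService"
    $dropLocation = "\\buildserver\drops\MyBuild\Latest"
    $installDir   = "C:\Services\MyWindowsService"
    $installUtil  = "C:\Windows\Microsoft.NET\Framework64\v4.0.30319\InstallUtil.exe"

    if (Get-Service $serviceName -ErrorAction SilentlyContinue) {
        Stop-Service $serviceName
        & $installUtil /u (Join-Path $installDir "MyWindowsService.exe")
    }

    Copy-Item (Join-Path $dropLocation "*") $installDir -Recurse -Force

    & $installUtil (Join-Path $installDir "MyWindowsService.exe")
    Start-Service $serviceName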
I'm using SQL-Server 2008 with Visual Studio Database Edition.
With this setup, keeping your schema in sync is very easy. Basically, there's a 'compare schema' tool that allows me to sync the schema of two databases and/or a database schema with a source-controlled creation script folder.
However, the situation is less clear when it comes to data, which can be of three different kinds:
Static data referenced in the code. Typical example: my users can change their settings, and their configuration is stored on the server. However, there's a system-wide default value for each setting that is used in case the user didn't override it. The table containing those default settings grows as more options are added to the program. This means that when a new feature/option is checked in, the system-wide default setting is usually created in the database as well.
Static data. E.g. a product list populating a drop-down list. The program doesn't rely on the existence of a specific product in the list to work. This could be, for example, a list of unicode-encoded products that should be deployed in production when the new "unicode version" of the program is deployed.
Other data, i.e. everything else (logs, user accounts, user data, etc.)
It seems obvious to me that my third item shouldn't be source-controlled (of course, it should be backed up on a regular basis).
But regarding the static data, I'm wondering what to do.
Should I append the insert scripts to the creation scripts? Or maybe use separate scripts?
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
Should I differentiate my two kinds of data? (The first one being usually created by a dev, while the second one is usually created by a non-dev.)
How do you manage your DB static data?
I have explained the technique I used in my blog Version Control and Your Database. I use database metadata (in this case SQL Server extended properties) to store the deployed application version. I only have scripts that upgrade from version to version. At startup the application reads the deployed version from the database metadata (lack of metadata is interpreted as version 0, i.e. nothing is yet deployed). For each version there is an application function that upgrades to the next version. Usually this function runs an internal resource T-SQL script that does the upgrade, but it can be something else, like deploying a CLR assembly in the database.
There is no script to deploy the 'current' database schema. New installations iterate through all intermediate versions, from version 1 to the current version.
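To illustrate the mechanics (my own sketch, not the code from the blog post), an upgrade runner driven by the extended property could look like this, assuming Invoke-Sqlcmd is available and the scripts are named upgrade-1.sql, upgrade-2.sql, and so on:

    # Read the deployed schema version (0 if the extended property is absent),
    # then apply every upgrade script above it, in order.
    $server = "localhost"; $database = "MyAppDb"

    $row = Invoke-Sqlcmd -ServerInstance $server -Database $database -Query "SELECT CAST(value AS int) AS Version FROM sys.extended_properties WHERE class = 0 AND name = 'SchemaVersion'"
    $current = if ($row) { $row.Version } else { 0 }

    Get-ChildItem ".\upgrades" -Filter "upgrade-*.sql" |
        ForEach-Object { [pscustomobject]@{ Version = [int]($_.BaseName -replace 'upgrade-', ''); File = $_ } } |
        Where-Object { $_.Version -gt $current } |
        Sort-Object Version |
        ForEach-Object {
            # Each script's final statement should bump the property itself
            # (sp_addextendedproperty / sp_updateextendedproperty).
            Invoke-Sqlcmd -ServerInstance $server -Database $database -InputFile $_.File.FullName
        }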
There are several advantages I enjoy with this technique:
It's easy for me to test a new version. I have a backup of the previous version; I apply the upgrade script, then I can revert to the previous version, change the script, and try again until I'm happy with the result.
My application can be deployed on top of any previous version. Various clients have various deployed versions. When they upgrade, my application supports upgrading from any previous version.
There is no difference between a fresh install and an upgrade; they run the same code, so I have fewer code paths to maintain and test.
There is no difference between DML and DDL changes (your original question). They are all treated the same way, as scripts run to change from one version to the next. When I need to make a change like you describe (change a default), I actually increase the schema version even if no other DDL change occurs. So at version 5.1 the default was 'foo', in 5.2 the default is 'bar', and that is the only difference between the two versions; the 'upgrade' step is simply an UPDATE statement (followed of course by the version metadata change, i.e. sp_updateextendedproperty).
All changes are in source control, part of the application sources (T-SQL scripts mostly).
I can easily get to any previous schema version, e.g. to repro a customer complaint, simply by running the upgrade sequence and stopping at the version I'm interested in.
This approach saved my skin a number of times and I'm a true believer now. There is only one disadvantage: there is no obvious place to look in source control to find 'what is the current form of procedure foo?'. Because the latest version of foo might have been upgraded 2 or 3 versions ago and hasn't changed since, I need to look at the upgrade script for that version. I usually resort to just looking in the database to see what's in there, rather than searching through the upgrade scripts.
One final note: this is actually not my invention. This is modeled exactly after how SQL Server itself upgrades the database metadata (mssqlsystemresource).
If you are changing the static data (adding a new item to the table that is used to generate a drop-down list) then the insert should be in source control and deployed with the rest of the code. This is especially true if the insert is needed for the rest of the code to work. Otherwise, this step may be forgotten when the code is deployed and not so nice things happen.
If static data comes from another source (such as an import of the current airport codes in the US), then you may simply need to run an already documented import process. The import process itself should be in source control (we do this with all our SSIS packages), but the data need not be.
Here at Red Gate we recently added a feature to SQL Data Compare allowing static data to be stored as DML (one .sql file for each table) alongside the schema DDL that is currently supported by SQL Compare.
The idea is that when you want to push changes to your target server, you do a comparison using the scripts as the source, which generates the necessary DML synchronization script to update the target. This means you don't have to assume that the target is being recreated from scratch each time. In time we hope to support static data in our upcoming SQL Source Control tool.
David Atkinson, Product Manager, Red Gate Software
I have come across this when developing CMS systems.
I went with appending the static data (the stuff referenced in the code) to the database creation scripts, then a separate script to add in any 'initialisation data' (like countries, initial product population etc).
For the first two steps, you could consider using an intermediate format (i.e. XML) for the data, then using a home-grown tool, or something like CodeSmith, to generate the SQL, and possibly source files as well if (for example) you have lookup tables which relate to enumerations used in the code - this helps enforce consistency (see the sketch below).
This has another benefit: if the schema changes, in many cases you don't have to regenerate all your INSERT statements - you just change the tool.
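As a trivial illustration of such a home-grown tool (the XML layout here is invented), generating INSERT statements from a lookup-table XML file:

    # Input (invented layout):
    #   <table name="Country">
    #     <row Code="US" Name="United States" />
    #     <row Code="FR" Name="France" />
    #   </table>
    [xml]$doc = Get-Content ".\data\Country.xml"
    $tableName = $doc.table.GetAttribute("name")

    $doc.table.row | ForEach-Object {
        $cols = $_.Attributes | ForEach-Object { $_.Name }
        $vals = $_.Attributes | ForEach-Object { "'" + $_.Value.Replace("'", "''") + "'" }
        "INSERT INTO [$tableName] ($($cols -join ', ')) VALUES ($($vals -join ', '));"
    } | Set-Content ".\generated\Country.sql"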
I really like your distinction of the three types of data.
I agree for the third.
In our application, we try to avoid putting the first kind in the database, because it is duplicated (as it has to be in the code, the database copy is a duplicate). A secondary benefit is that we need no join or query to access that value from the code, which speeds things up.
If there is additional information that we would like to have in the database, for example if it can be changed per customer site, we separate the two. Other tables can still reference that data (either by index, e.g. 0, 1, 2, 3, or by code, e.g. EMPTY, SIMPLE, DOUBLE, ALL).
For the second kind, the scripts should be in source control. We separate them from the structure (I think they are typically replaced as time goes by, while the structure keeps accumulating deltas).
How do I (as a developer) warn the people doing the deployment that they should execute an insert statement?
We have a complete procedure for that, and a readme coming with each release, with scripts and so on...
First off, I have never used Visual Studio Database Edition. You are blessed (or cursed) with whatever tools this utility gives you. Hopefully that includes a lot of flexibility.
I don't know that I'd make that big a difference between your type 1 and type 2 static data. Both are sets of data that are defined once and then never updated, barring subsequent releases and updates, right? In which case the main difference is in how or why the data is as it is, and not so much in how it is stored or initialized. (Unless the data is environment-specific, as in "A" for development, "B" for Production. This would be "type 4" data, and I shall cheerfully ignore it in this post, because I've solved it using SQLCMD variables and they give me a headache.)
First, I would make a script to create all the tables in the database - preferably only one script, otherwise you can have a LOT of scripts lying about (and find-and-replace when renaming columns becomes very awkward). Then, I would make a script to populate the static data in these tables. This script could be appended to the end of the table script, or made its own script, or even made one script per table - a good idea if you have hundreds or thousands of rows to load. (Some folks make a csv file and then issue a BULK INSERT on it, but I'd avoid that, as it just gives you two files and a complex process [configuring drive mappings on deployment] to manage.)
The key thing to remember is that data (as stored in databases) can and will change over time. Rarely (if ever!) will you have the luxury of deleting your Production database and replacing it with a fresh, shiny, new one devoid of all that crufty data from the past umpteen years. Databases are all about changes over time, and that's where scripts come into their own. You start with the scripts to create the database, and then over time you add scripts that modify the database as changes come along -- and this applies to your static data (of any type) as well.
(Ultimately, my methodology is analogous to accounting: you have accounts, and as changes come in you adjust the accounts with journal entries. If you find you made a mistake, you never go back and modify your entries; you just make subsequent entries to reverse and fix them. It's only an analogy, but the logic is sound.)
The solution I use is to have create and change scripts in source control, coupled with version information stored in the database.
Then, I have an install wizard that can detect whether it needs to create or update the db - the update process is managed by picking appropriate scripts based on the stored version information in the database.
See this thread's answer. Static data from your first two points should be in source control, IMHO.
Edit:
All-in-one or a separate script? It does not really matter, as long as you (the dev team) agree with your deployment team. I prefer separate files, but I can still always create an all-in-one.sql from those in the proper order [Logins, Roles, Users; Tables; Views; Stored Procedures; UDFs; Static Data; (Audit Tables, Audit Triggers)].
How do you make sure they execute it: well, make it another step in your application/database deployment documentation. If you roll out an application which really needs specific (new) static data in the database, then you might want to perform a DB version check in your application, and you update the DB_VERSION to your new release number as part of that script. Then your application on start-up should check it and report an error if a newer DB version is required (see the sketch after this list).
Dev and non-dev static data: I have never actually seen this case. More often there is real static data, which you might call "dev", such as major configuration, ISO static data, etc. The other type is default lookup data, which is there for users to start with, but they might add more. The mechanism to INSERT these data might be different, because you need to ensure you do not destroy (power-)user-created data.
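For the version check in the second point, a minimal sketch (the DB_VERSION table layout and the required version are assumptions):

    # Fail fast if the database is older than this release requires.
    $requiredDbVersion = 42
    $row = Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyAppDb" -Query "SELECT TOP 1 Version FROM DB_VERSION ORDER BY Version DESC"
    if (-not $row -or [int]$row.Version -lt $requiredDbVersion) {
        throw "Database is at version '$($row.Version)' but this release requires $requiredDbVersion; run the missing scripts first."
    }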