How to interpret merge information in TFS log output (or: how can I know which changesets are part of a build?)

First the question, then some background.
We're using Visual Studio 2008, C# 3.0 and .NET 3.5, and TFS 2008 as our VCS.
If I execute this command against our TFS database, to show information about a merge commit:
tf changeset 13469 /noprompt
I get output like this (redacted):
Changeset: 13469
User: Lasse
Date: 12. november 2010 14:06:06
Comment:
Some text here.
Items:
merge, edit $/path/to/target/filename.txt
... more merged files
... some blurb about reviewer texts, etc. nothing important/useful here
This was merged from a different path in the same database, but this information is not available here.
For instance, if I merged from $/path/to/main/ down to $/path/to/branch/, the path to the main project is not available in the merge changeset. (Note: please don't tell me I'm merging the wrong way; it doesn't matter in this case, so I kept the example simple.)
So, the question is this: Is there any way I can find out where that changeset was merged from? Which branch it came from? ... and which changeset it originated as in that branch (like 13468? 13462? 13453? ...)
Background
We haven't used much branching and merging so far, except for simple stuff like "tagging" a release.
From now on we're looking at using branching much more actively, but this creates a challenge.
Let's say I open up our bug tracker, take the topmost bug, fix it, and check it in. This is done in one branch; let's say this is the master branch.
Now, at some point, a tester is going to verify that the hotfix we're going to release has this bug fixed, so he opens up our product and wants to verify before he starts that the bugfix has actually gone into this build.
When we didn't use branching, we simply took the changeset number of the commit that ultimately fixed a case and typed that into the case itself. Additionally, our product was built with a build number (the 4th part of the version number) identical to the number of the latest changeset that became part of the build.
This way, the tester could simply look at the case, the version number and easily deduce if the build had that changeset or not. If the changeset number in the version number was equal to or higher than the one in the case, the changeset was part of that build.
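To make the deduction concrete, here is a minimal sketch of the check the tester is effectively performing (assuming, as described above, that the build stamps the last changeset number into the 4th part of the assembly version; the case number is a placeholder):

using System;
using System.Reflection;

class BuildContainsFix
{
    static void Main()
    {
        // 4th part (Revision) of the version = latest changeset that went into this build
        int buildChangeset = Assembly.GetExecutingAssembly().GetName().Version.Revision;
        int caseChangeset = 13462; // placeholder: the changeset number noted on the case

        // The fix is in the build if the build was cut at or after the fixing changeset
        Console.WriteLine(caseChangeset <= buildChangeset
            ? "Fix is part of this build"
            : "Fix is NOT in this build");
    }
}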
With branches, that doesn't work. If I commit changeset X on the master branch but forget to merge, the tester can't simply say "if I run version X or higher, I've got that fix" any more.
Note that we're not using TFS work items, so there's no easy built-in way to link commits and cases.
The reason I asked about the TFS history output is that I assume that if I can see that changeset 13469 really came from another branch and corresponds to changeset 13462 there, and the programmer has noted 13462 on the case, I can say "13462 is now part of the build, because it was merged to the right branch, became 13469, and the build output has version 13470."
In other words, I could build a tool that, as part of the build, looked at the history of the database, grabbed all the necessary information and stored it in a database, so that I could take cases on our ready-to-test list, compare them against the version number of the executable the tester was running, and just list all cases that are both ready to test and part of that build.
So my question is really this: does anyone have any hints on how we can solve this? Perhaps we're boneheaded and need to be told the right way to do this, so if you have any good ideas, let me know.

I hear and feel your lament here, as we've run into the same limitation. With TFS 2008, there's no easy way to see that history. With TFS 2010, and the branch visualizer, it gets easier.
If this is something you really need, you could potentially write it yourself using the TFS API. You would have to walk your way back through the various changesets for the files. It would be relatively straightforward to code (a sketch follows below):
Get merge changeset
Get prior merge changeset
Determine merge source from the first changeset
Get history for the file between the dates of the two changesets.
I've done this manually before, but you could either do this in C# code, or, alternatively, write a PowerShell script to do this.
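For what it's worth, the version control API exposes merge tracking directly: VersionControlServer.QueryMerges can report, for a given target changeset, which source changesets were merged into it. A rough sketch (the server URL and paths are placeholders, and the argument handling may need tweaking for your setup):

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class MergeSourceFinder
{
    static void Main()
    {
        TeamFoundationServer tfs = TeamFoundationServerFactory.GetServer("http://tfsserver:8080");
        VersionControlServer vcs =
            (VersionControlServer)tfs.GetService(typeof(VersionControlServer));

        // Which changesets were merged into changeset 13469 under $/path/to/branch?
        ChangesetVersionSpec target = new ChangesetVersionSpec(13469);
        ChangesetMerge[] merges = vcs.QueryMerges(
            null, null,                    // source path/version: unknown, let TFS resolve it
            "$/path/to/branch", target,    // target path and version
            target, target,                // restrict the query to this one target changeset
            RecursionType.Full);

        foreach (ChangesetMerge merge in merges)
        {
            // SourceVersion is the changeset it originated as on the source branch
            // (e.g. 13462); TargetVersion is the merge changeset itself (13469).
            Console.WriteLine("{0} -> {1}", merge.SourceVersion, merge.TargetVersion);
        }
    }
}

A tool run as part of the build could collect these source/target pairs into a database and map case numbers to build numbers, as the question suggests.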

Related

Multi-Version Code Support in Git

I have been working on a big SQL-based project that is taking an increasing amount of time and effort to maintain across its versions. Let's keep it simple: I have three folders, one for each version of the code, called Ver1, Ver2, and Ver3. All three version folders contain exactly the same filenames, but their content differs from version to version. If I make a change to a particular file in Ver3 that also exists in Ver2 and Ver1, how can I use Git not necessarily to make the same changes in those other versions (not always practical due to partial rewrites for performance or logic changes), but to let me know that the other two versions of the file need to be updated in order to catch up to Ver3? If Git isn't suited for this task, or if you have any experience with a similar issue, I would much appreciate any suggestions.
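One way this is commonly approached (a sketch, assuming the versions become branches rather than sibling folders): Git can then list the commits that touched a file on one branch but were never ported to another. Branch and file names below are placeholders:

git log ver2..ver3 -- some_file.sql   # commits on ver3, absent from ver2, that touched this file
git cherry ver2 ver3                  # all commits on ver3 not yet applied to ver2

With the three-folder layout as described, Git sees three unrelated files and cannot make that connection on its own.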

Can Liquibase or Flyway handle multi non-linear versioning scenario?

Here is a tough one.
v1.1 has a table with index i.
v2.1 contains this table and index as well.
A bug was discovered, and in v1.1.0.1 we changed the code and, as a result, decided to drop the index.
We created a corresponding patch for v2.1, v2.1.0.6.
The customer applied patch v1.1.0.1 and a few weeks later upgraded to v2.1 (without patch 6).
As the v2.1 code base performs better with the index, we have a "broken" application.
I can't force my customers to apply the latest patch.
I can't force the developers to avoid such scenarios.
Can Liquibase or Flyway handle this scenario?
I guess these kinds of problems are more organizational than tool-specific. If you support multiple versions (a branch 1.0 and a newer one 2.0) and provide patches for both (which is a totally legitimate approach - don't get me wrong here), you will probably have to provide upgrade notes for all these versions, and maybe a matrix that shows from which version to which you can go (and what you can't do).
I just happened to upgrade an older version of Atlassian's Jira Bugtracker and had to find out that they do provide upgrade notes for all versions.
That would have meant going from one version to the next to finally arrive at the latest version (I was on version 4.x and wanted to go to the latest 5.x) and obeying all the upgrade notes in between. (By the way, I skipped all this and set it up as a completely fresh installation to avoid it.)
Just to give you an impression, here is a page that shows all these upgrade notes:
https://confluence.atlassian.com/display/JIRA/Important+Version-Specific+Upgrade+Notes
So I guess you could provide a small script that recreates the index if somebody wants to go from version 1.1.0.1 to 2.1, and state in the upgrade notes that it needs to be applied.
Since you asked whether Liquibase (or Flyway) can support this, it may be helpful to mention that Liquibase (I only know Liquibase) has something called preConditions. This means you can run a changeset (or an SQL statement) conditionally, for example based on whether an index exists (<indexExists>).
That could help to re-create the index if it is missing.
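To make that concrete, a changeset along these lines could recreate the index only where it is missing (a minimal sketch; the id, author, index and table names are placeholders):

<changeSet id="recreate-index-i" author="upgrade-fix">
    <preConditions onFail="MARK_RAN">
        <not>
            <indexExists indexName="i" tableName="mytable"/>
        </not>
    </preConditions>
    <createIndex indexName="i" tableName="mytable">
        <column name="mycolumn"/>
    </createIndex>
</changeSet>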
But since version 2.1 has already been released (before anyone knew that the index might be dropped in a future bugfix), there is no chance to add this to the upgrade procedure of version 2.1.
Liquibase will handle the drop-index change across branches fine, but since you are going from a version that contains a change (the drop-index change) to one that does not include it, you should expect to end up with your broken app state.
With Liquibase, changes are completely independent of each other and independent of any versioning. You can think of the Liquibase changelog as an ordered list of changes to make, each with a unique identifier. When you do an update, Liquibase checks each change in turn to see if it has been run, and runs it if it has not.
Any "versioning" is purely within your codebase and branching scheme, liquibase does not care.
Imagine you start out with your 1.1.0 release that looks like:
change a
change b
change c
When you deploy 1.1.0, the customer database will know that changes a, b, and c were run.
You release v2.1 with new changesets added to the end of your changelog file, so it looks like:
change a
change b
change c
change x
change y
change z
and all 2.1 customers' databases know that a, b, c, x, y, z are applied.
When you create 1.1.0.1 with changeset d that drops your index, you end up with this changelog in the 1.1.0.1 branch:
change a
change b
change c
change d
But when you upgrade your 1.1.0.1 customers to 2.1, Liquibase just compares the defined changesets (a, b, c, x, y, z) against the known changesets (a, b, c, d) and runs x, y, z. It doesn't care that there is an already-run changeset d; it does nothing about that.
The liquibase diff support can be used as a bit of a sanity check and would be able to report that there is a missing index compared to some "correct" database, but that is not something you would normally do in a production deployment scenario.
The answer may be a bit late, but I will share my experience. We also came across the same problem in our project. We dealt with it in the following way:
Since releases in our project were not made often, we marked each changeset in Liquibase with a particular context. The value was the exact version migration (like v6.2.1-v6.2.2). We passed the value to Liquibase through JNDI properties, so the customer was able to specify it. So during an upgrade, the customer was responsible for setting the right value for the migration scope. A Liquibase context can accept a list of values, so in the end the context looked like this:
context=v5.1-5.2,v5.3-5.3.1,v5.3.1-5.4,v6.2.1-v6.2.2
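For illustration, a changeset scoped this way carries the context as an attribute, and the value supplied by the customer (via JNDI in our case) selects which changesets run (ids and names below are placeholders):

<changeSet id="drop-index-i" author="dev" context="v6.2.1-v6.2.2">
    <dropIndex indexName="i" tableName="mytable"/>
</changeSet>

liquibase --contexts=v5.1-5.2,v5.3-5.3.1,v5.3.1-5.4,v6.2.1-v6.2.2 update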

build script - how to do it

About 2 months ago I took over the build process at my current company. Even though I don't have much knowledge of it, I was the only one with enough time, so I didn't have much choice.
The situation is not that good, and I would like to do the following:
Label files in SourceSafe with a version (for example, ProjectName PV 1.2)
Get files from SourceSafe to a specific directory
Build VB6/C++/C# projects (yes, there are all kinds of them)
Build InstallShield setups
For now this is partly done using batch scripts (one for labeling and getting, one for building, etc.), so when a build starts I pretty much have to babysit it.
A good part of this code could be reused.
Any recommendations on how to do this better? One big problem is the whole bunch of dependencies between projects. Also, labeling has to increment the version and, if necessary, change PV to EV.
I would like to minimize user interaction as much as possible: one click on one build script (Spolsky is god) and all is done - no need to increment the version, set where to get files from, and similar stuff.
Is batch scripting the best way to go? Should I do some of the functionality with MSBuild? Are there any other options?
Specific code is not needed; for now I just need ideas on how to improve this, even though code wouldn't hurt.
Tnx,
Marko
Since you already have a build system (even though some of it is currently "manual"), whatever you do, don't start over from scratch.
(1) Make sure you have a test machine (or Virtual Machine) on which to work. Thus you can make changes and improvements without having to worry about breaking anything.
(2) Put all of your build scripts and tools in version control, not just the source code. Then as you make changes, see if they work. If they do, then save them to version control. If they don't, then roll them back.
(3) Choose one area to work on at a time. Don't try to do everything at once. Going from a lot of manual work to "one-click" will take time no matter what build system you're working with.
Sounds like you want a continuous integration solution, like CC.Net. It has configuration options to do all the things you want and a great community to answer questions.
Also, batch scripting is probably not a good option. Sophisticated build and integration tools will let you feed parameters into the build and create different builds for different environments (test, production, etc.). Batch scripting will involve a lot of hand-coding and glue.
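If MSBuild ends up doing the orchestration, a wrapper project can drive all the project types from one entry point. A minimal sketch (project paths, the VB6 command line and the version property are placeholders; VB6 and InstallShield have no native MSBuild tasks, so they go through Exec):

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="BuildAll">
  <PropertyGroup>
    <!-- supplied from the command line: msbuild build.proj /p:ProductVersion=1.2.0.0 -->
    <ProductVersion Condition="'$(ProductVersion)' == ''">0.0.0.0</ProductVersion>
  </PropertyGroup>
  <Target Name="BuildAll">
    <MSBuild Projects="src\SomeApp\SomeApp.csproj" Properties="Configuration=Release"/>
    <Exec Command="vb6.exe /make src\LegacyApp\LegacyApp.vbp"/>
  </Target>
</Project>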

Check sql script valid

As part of a release we run a load of PL/SQL scripts against a database. Recently someone left the ; off the end of a line in one script that called another script, which meant that script did not get run. Because this did not cause an error - it just didn't get run - it took quite a while to track down what had happened.
I want to check the scripts before they are run for lines that are missing either a ; at the end or a / on the line after. This is made more complicated because a 'line' in the script could actually span more than one line if it is a statement or block of code.
It seems to me that to do this I'm going to have to parse the scripts and then check that they meet the above.
I've found ANTLR and wonder if this might be a way to do it, since there seem to be existing PL/SQL grammars, but it looks like that's going to be a steep learning curve for what's just a simple check.
Does anyone know an easy way or any other tools, eclipse plugins etc that I can use to check for lines in the scripts that are missing either a ; at the end or a / on the line after?
Update
We already do most of the stuff Tom H suggested. The scripts are run against our test server, and we have a version table that gets updated at the end. The problem was that the missing semi-colon in the container script meant one script did not get run, but the rest - including the one to update the version number - ran without errors. Therefore the problem only got picked up quite a way into testing. The database then had to be restored before re-running the scripts with the missing semi-colon added, so basically half a day of testing time was lost. If there were a simple way to check this before running the scripts against the test server, it could save quite a bit of time.
I agree with MattH that you may be going about this the wrong way. I would just add an insert statement to the end of all of your scripts which inserts a "version" row into a table in the database. At the end of your deployment scripts, it's then an easy task to check that the version table has all of the correct rows in it.
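A sketch of that idea in Oracle terms (table and column names are placeholders): the last statement of each script records that it ran, and one query at the end verifies the release.

INSERT INTO release_scripts (script_name, release_version, run_date)
VALUES ('010_add_customer_index.sql', '1.2.0', SYSDATE);

-- at the end of the deployment:
SELECT script_name FROM release_scripts WHERE release_version = '1.2.0';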
Also, you should have all of your release scripts being run exactly as they will be in production against your QA server. That's where all of the testing takes place. You never do anything to the server besides what is in your release steps - you only run the release scripts and if those release scripts are ever changed then you refresh the QA server with them and redo testing.
When you go to production, your release process has then been fully tested. As a fail-safe measure you can also use tools like Red Gate's SQL Compare and SQL Data Compare to check that production matches the QA server. The data compare would only be against certain tables (look-up tables, etc.). If you have data changes to major tables (1M rows, etc.), then you can write a custom script to check that they are correct.
Even if the scripts are different for every release (and not part of a defined source control structure that creates or replaces database objects) I would adopt a practice of breaking the scripts down into the most fundamental units of work per file and deploying them through Ant with the standard sql task. You probably have these types of scripts:
CREATE or REPLACE dbobject...
SQL DML scripts
Anonymous PL/SQL blocks
If you standardize on a consistent statement delimiter (I suggest using "/" since it works with all of the cases above) and set the deployment to fail on error, then Ant will either deploy all of the files or indicate why it couldn't.
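A minimal sketch of that Ant setup (connection settings and paths are placeholders; the JDBC driver jar also needs to be on Ant's classpath). With delimitertype="row", the "/" only counts as a delimiter when it sits on its own line, matching the convention above:

<sql driver="oracle.jdbc.OracleDriver"
     url="jdbc:oracle:thin:@dbhost:1521:orcl"
     userid="deploy" password="secret"
     delimiter="/" delimitertype="row"
     onerror="abort">
  <fileset dir="release-scripts" includes="*.sql"/>
</sql>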
I think it would be very difficult to otherwise parse files of one or more SQL and/or PLSQL statements and find missing delimiters if there are no standards on delimiter choice or statements per file.
Just a thought, but are you going about this the wrong way?
I assume that, at the file level, the lack of a semi-colon in the file was not a problem, but it only became a problem when run via the batch processing? If that's the case, maybe you can change your batch processing to cope with this.
If it was the file, then testing should have picked it up. You don't want to parse your input files to make sure they compile etc.
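That said, if a quick first-pass check is still wanted despite the caveats above, even something this crude would have caught the container-script case (a heuristic sketch, not a parser - it will misjudge files that legitimately end some other way):

using System;
using System.IO;
using System.Linq;

class TerminatorCheck
{
    static void Main(string[] args)
    {
        foreach (string path in Directory.GetFiles(args[0], "*.sql"))
        {
            // keep only non-blank, non-comment lines
            string[] lines = File.ReadAllLines(path)
                .Select(l => l.Trim())
                .Where(l => l.Length > 0 && !l.StartsWith("--"))
                .ToArray();

            string last = lines.Length > 0 ? lines[lines.Length - 1] : "";

            // a well-terminated script ends with ";" or a lone "/"
            if (!last.EndsWith(";") && last != "/")
                Console.WriteLine("Suspect terminator: " + path);
        }
    }
}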

SQL Server Version Updating Tables

I am part of a software development company looking for a good way to update my SQL Server tables when I put out a new version of the software. I know the answer is to probably use scripts in one form or another.
I am considering writing my own .NET program that runs the scripts to make it a bit easier and more user-friendly. I was wondering if there are any tools out there along those lines. Any input would be appreciated.
Suggest you look at Red Gate's SQL Compare.
What kind of product are you using for your software installation? Products like InstallShield often now include SQL steps as an option for part of your install script.
Otherwise, you could look at using isql/osql to run your script from the command line through a batch file.
One of the developers where I'm currently consulting wrote a rather nifty SQL installer. I'll ask him when he gets in how he went about it.
I use Red Gate's SQL Compare all the time. Also, you need to make sure to provide a rollback script in case you need to go back to the previous version.
Have a look at DB Ghost Packager Plus.
Packages your source database and the compare and sync engine into a simple EXE for deployment. The installer EXE will automatically update any target schema to match the source on-the-fly at installation time.
Red Gate's SQL Compare to generate the change script, and Red Gate's Multi Script to easily send it to multiple SQL databases at the same time.