MSBuild for specific changesets - more than one changeset! - msbuild

We are using MSBuild on TFS 2008 for building our solutions.
I need your advice and help with the scenario below.
For example:
We prepared a full build for one of our customers.
After the package was ready, two developers wanted to add their changes to the package.
I am trying to find a solution to add only those two developers' check-ins to a new build.
I mean I need to specify changeset numbers as a parameter to MSBuild.
For example, I only want to start a build for changeset numbers 200, 400 and 434. All three of them must be included in the get-sources step.
I found a solution like /p:GetVersion=C1800
http://blogs.msdn.com/b/granth/archive/2008/06/26/how-to-make-team-build-get-a-previous-version.aspx
but this gets only one changeset; I need to specify more than one changeset number.
Or my approach could be totally wrong. Do you have any suggestions or guidance?
Thanks a lot,
Fatih.

The TFS Get command gets all changesets up to your specified changeset. So if you want to include 200, 400 and 434, you only need to specify 434 as the changeset to get to. Note that this will also get all other changesets that are newer than your workspace's version and older than 434.
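For that "get everything up to 434" approach, the command would be something along the lines of the following, run from the workspace root (exact options may vary with your tf version):
tf get /recursive /version:C434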
I don't think TFS allows you to get just a specific changeset unless you also specify the items you want to get. What I mean is, if changeset 123 includes files A, B, and C, and you just want to get changeset 123 (and nothing else), you need to do something like
"tf get A;123"
"tf get B;123"
"tf get C;123"
You can find more information about the Get command here.
UPDATE:
Just found out that the tfpt tool can get a specific changeset:
Usage: tfpt getcs /changeset:changesetnum [/force] [/overwrite]
Gets only the files in a particular changeset at the version of the changeset.
/changeset   Specifies the number of the changeset to get.
/force       Same as tf get /force
/overwrite   Same as tf get /overwrite
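Assuming tfpt getcs behaves as documented above, you could then pull exactly the three changesets you mentioned into the build workspace one at a time:
tfpt getcs /changeset:200
tfpt getcs /changeset:400
tfpt getcs /changeset:434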

Why is WiX Torch giving me error code 0279?

I have a problem when building a .wix MST difference (transform) file. I get the following error:
"The table definition of target database does not match the table definition updated database. A transform requires that the target database schema match the update database schema".
I tried finding a solution on the internet for almost 2 hours but no luck. I know it is probably caused by a difference in the MSI table schemas, but I have no idea how to fix it.

variable for SQLLOGDIR not found

I am using the Ola Hallengren scripts for our maintenance solution. When I run just the database backup job for the user databases I get the following error: "Unable to start execution of step 1 (reason: Variable SQLLOGDIR not found). The step failed."
I have checked the directory permissions and there is no issue there. The script creates the job with no problem; I get the error message when I try to run the job.
I had this same issue just the other day. I run a number of 2017 servers, but the issue happened when I started running the scripts on a 2012 server.
I've dropped Ola a mail to confirm, but as best I can make out, the SQLLOGDIR token specified in the 'Advanced' tab for the step (for logging output) is not compatible with 2012, and maybe with versions below 2017, though I have not tested these.
HTH,
Adam.
You need to replace the job-name part in the Advanced tab. For example, replace
$(ESCAPE_SQUOTE(JOBNAME)) with CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID)), so that it looks like this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
instead of this:
$(ESCAPE_SQUOTE(SQLLOGDIR))\$(ESCAPE_SQUOTE(JOBNAME))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt
Do this for all the other jobs if you don't want to recreate them.
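If you would rather script the change than edit each step through the UI, a minimal T-SQL sketch along these lines should work (the job name, step id and file name here are examples only; adjust them to match your jobs):
USE msdb;
GO
-- Point the step's output file at a JOBID-based name instead of JOBNAME
EXEC dbo.sp_update_jobstep
    @job_name = N'CommandLogCleanup',
    @step_id = 1,
    @output_file_name = N'$(ESCAPE_SQUOTE(SQLLOGDIR))\CommandLogCleanup_$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt';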
I had the same issue on my SQL Server 2012 instance; the error occurred during the database backup using Ola's scripts. As mentioned above, the issue is with the output file: I changed the location and the output file name in the SQL Agent job and reran the job successfully.
The error is related to the job output file.
When you create a maintenance job using the Ola script, it automatically assigns an output file to the step. Sometimes that location does not exist on the server.
I faced the same issue. I then ran the integrity script manually on the server and it completed without error, so I found that the problem was in the job configuration.
I changed the job output file location and now the job runs fine as well.
The trick is to build the string for the @output_file_name parameter element by element before calling the stored procedure. If you look into Ola's code you will see that is exactly what he is doing.
I have tried to describe this in more detail in the post Add SQL Agent job step with tokens in output file name.
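As a rough illustration of that element-by-element approach (the job name, subsystem and command below are placeholders, not Ola's actual values):
DECLARE @output_file_name nvarchar(max);
-- Assemble the tokenised path piece by piece, then pass it in one go
SET @output_file_name =
      N'$(ESCAPE_SQUOTE(SQLLOGDIR))' + N'\'
    + N'DatabaseBackup_'
    + N'$(ESCAPE_SQUOTE(JOBID))_$(ESCAPE_SQUOTE(STEPID))_'
    + N'$(ESCAPE_SQUOTE(DATE))_$(ESCAPE_SQUOTE(TIME)).txt';
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @step_name = N'DatabaseBackup - USER_DATABASES - FULL',
    @subsystem = N'CMDEXEC',
    @command = N'sqlcmd ...',   -- placeholder command
    @output_file_name = @output_file_name;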

MSBuild property to allow for column type update?

Does an MSBuild property exist that allows for an in-place column type update? My team would like our MSBuild process to work that way rather than dropping the table and recreating the updated version.
We're OK with data loss.
--Sarah
Trying /p:BlockOnPossibleDataLoss=False, as per what I read about this here:
https://dba.stackexchange.com/questions/139478/how-to-prevent-ssdt-publishing-from-dropping-columns
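For reference, with SSDT/dacpac deployments this option is typically set either in the publish profile or on the SqlPackage command line; a hedged sketch with placeholder server and database names:
SqlPackage.exe /Action:Publish /SourceFile:MyDatabase.dacpac /TargetServerName:MyServer /TargetDatabaseName:MyDatabase /p:BlockOnPossibleDataLoss=False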

Why does git interpret sql files as binaries during a merge conflict?

I have a problem resolving merge conflicts in SQL files.
MenkeTTA#909086 MINGW64 //FILE0019 (master)
$ git pull
remote: Microsoft (R) Visual Studio (R) Team Services
remote: Found 5 objects to send. (5 ms)
Unpacking objects: 100% (5/5), done.
From https://***
d58a69b..4830c58 master -> origin/master
warning: Cannot merge binary files: example_StoredProcedure.sql (HEAD vs. 4830c5886d3e1eac5ac76d1d49496afb43f444c3)
Auto-merging WRR - example_StoredProcedure.sql
CONFLICT (content): Merge conflict in example_StoredProcedure.sql
Automatic merge failed; fix conflicts and then commit the result.
When the merge conflict occurs, git isn't creating a pre-merged file with the competing changes in the usual structure:
/SQL-File/
<<<<<<< HEAD
competing change A
=======
competing change B
>>>>>>> branch-a
Git is treating both files as binaries, but only for the merge-conflict handling (a normal merge without a conflict works properly). I can choose my own version of the file or the pulled competing file from the remote as the new head for the next push.
I reproduced this conflict with a normal .txt file. Git then treats the merge conflict as expected, creating one pre-merged file with both competing changes/commits in which I can manually fix the code the way I want.
To make git recognize the sql files as text I added
.sql diff
to the .gitattributes file like it's described here. Does anyone know how I can make git create an ordinary pre-merged file with both versions of the competing commits when working with sql files?
First, a quick note:
To make git recognize the sql files as text I added
.sql diff
to the .gitattributes file ...
The .gitattributes line should read *.sql diff (I've fixed the linked answer, which is on a question about getting git diff to treat the file as text). However, if the file really is text, you may want, or even need, *.sql text. Note: this will not help at all if the file is not text. If the file's content is UTF-16, it is not text to Git, at least.
Consider marking the file as example_StoredProcedure.sql text, i.e., not all .sql files, just this one particular file. I'm also curious to see whether just marking it diff suffices! Update, Nov 2019: apparently marking the file as diff is not sufficient, though I have not verified this myself.
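In .gitattributes terms, those two options would look something like this (treating the single path literally is just my illustration):
# just this one file
example_StoredProcedure.sql text diff
# or, if every .sql file in the repo really is plain text
*.sql text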
(The difference is that the diff attribute tells Git how to show the file in git diff output,1 while the text tells Git that instead of using its built in guessing algorithm, it should, for all purposes, use the setting to decide whether the file is text. The guessing algorithm consists of scanning an initial chunk of the file's contents to see how many "text-like" characters there are vs "non-text-like" characters. Probably there should be a special allowance for UTF-8 Byte Order Markers at the top, but there isn't. Curiously, during filtering, there are explicit checks.)
1Well, it's actually more involved than just showing, but I think this is a good way to start thinking about the issues. Note that you can augment the diff setting with a driver. It's not clear to me how the low level file merge interacts with a diff driver and I do not have time to experiment with it right now.
Longer explanation
warning: Cannot merge binary files: example_StoredProcedure.sql (... vs ...)
tells us that you are correct, that Git is treating the three versions of example_StoredProcedure.sql as binary. (I see you added this output after the initial question; good thing, since it's the key!)
But why did I say three versions, when the line goes on to say:
HEAD vs. 4830c5886d3e1eac5ac76d1d49496afb43f444c3
Git is being a little lazy here: all merges involve three inputs, not just two. One of these is the one you supply explicitly—or, as in this case, git pull ran git merge and git pull itself supplied the big ugly hash ID 4830c5886d3e1eac5ac76d1d49496afb43f444c3.
The second input to a merge is always the current commit, aka HEAD. You normally get this by being on the branch in the first place: HEAD names the branch-name, the branch-name identifies the commit, and this is where you want the final merge commit to go, so it all fits together.
The third input—or internally, first; internally the "theirs" version is the third input—is one that Git computes for you, based on the HEAD and other or --theirs commits: Git walks through enough of the commit graph to find the best common ancestor commit.1 It's this common ancestor commit that determines which files need merging, and if a file does need merging, the built in merge driver needs to use diffs to get textual changes to merge. For both this and for git diff, Git has a differencing engine built in to it (modified from LibXDiff).
Hence Git can, in effect, run:
git diff --find-renames <merge base commit> HEAD
to see what we did to each of our files, and:
git diff --find-renames <merge base commit> <other commit>
to see what they did to each of our files. Then:
If we changed a file and they did not touch it at all, the merge is easy: take ours.
If they changed a file and we did not touch it at all, the merge is easy: take theirs.
If we both changed a file but made the new file exactly the same, the merge is easy: take either one (ours, really, since it's in place).
Otherwise, attempt to combine the changes.
For speed reasons, Git uses the hash IDs ("blob" hashes, for the file's content) to accomplish the first three bullet points without ever having to fire up the file-level diff. This can, and does, merge unconflicted binary-file changes. It's only the final stage, where all three blob hashes differ, that requires a textual diff so as to combine changes.
Obviously, if Git can't diff the file, it cannot merge the two diff outputs. But does just marking the file as text-diff-able (pattern diff in .gitattributes) make the merge proceed? What happens if you set a diff driver, does the low-level file merge code use that driver? It "wants" to use the xdiff internal interface to find hunks; that's a lot easier than interpreting text output from a driver; you probably have to define a merge driver to get a detected-as-binary file to be merged, even if you have marked it as diff.
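If you do end up needing one, a custom merge driver is wired up roughly like this; the driver name "sqltext" and the choice of git merge-file as the command are just an illustration, not something I have verified against these SQL files:
# .gitattributes
*.sql merge=sqltext
# .git/config (or ~/.gitconfig)
[merge "sqltext"]
    name = three-way text merge for SQL files
    driver = git merge-file -L mine -L base -L theirs %A %O %B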
Additional note, Nov 2019: Since Git 2.18, Git has the ability to convert between committed UTF-8 data and in-work-tree other-format data. To use this, set the working-tree-encoding attribute. For instance, the gitattributes documentation shows this example line:
*.ps1 text working-tree-encoding=UTF-16LE eol=CRLF
that would keep all *.ps1 files in UTF-8 internally (in the frozen, committed files inside each commit) but keep the useful-format versions of those files in your work-tree in UTF-16-LE. I have no data as to whether this would work with these SQL files.
1In all cases, but especially in problem cases where there's more than one best common ancestor, git merge's behavior actually depends on the strategy you chose. The usual recursive strategy will merge the merge bases, commit the result, and then use that commit as the merge base! Other merge strategies work differently.

executing a common sql file using liquibase

I have a situation to handle. I have my Liquibase changelogs structured as per the recommended best practices. The changelog XML is structured as given below:
Master XML
-->Release XML
-->Feature XML
-->changelog XML
In our application group, we run updateSQL to generate the consolidated sql file and get the changes executed through our DBA group.
However, the real problem I have is executing a common set of SQL statements during every iteration, like
ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA
since the DBA executes the changes as SYSTEM but the target schema is APPLNSCHEMA.
How do I include such common repeating statements in a Liquibase changelog?
You would be able to write an extension (http://liquibase.org/extensions) that injects it in. If you need to do it per changeLog, it may work best to extend XMLChangeLogParser to automatically create and add a new changeSet that runs the needed SQL.
You could make a changeSet with the attribute 'runAlways' set to true and include the SQL.
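A minimal sketch of such a changeSet (the id and author values are placeholders):
<changeSet id="set-current-schema" author="dba" runAlways="true">
    <sql>ALTER SESSION SET CURRENT_SCHEMA=APPLNSCHEMA</sql>
</changeSet>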
As far as I know, there isn't a way to have Liquibase itself do this. I suggest that you wrap Liquibase with your favorite scripting language such that you run a command "generateSQLforThoseCrazyDBAs" that runs Liquibase and then prepends the SQL you need to the output created by Liquibase.