Is it possible to override the behavior of a merge module - wix

Suppose I have a merge module that installs a file "MyFile.txt" to a certain location, and I wish to use that merge module, but I want to supply a different copy of "MyFile.txt" from the one supplied with the merge module.
Is it possible to do this? (And, for bonus points, how can I do this using WiX?)
Update: Roughly speaking, MyFile.txt is part of a packaged-up component of installable items that we provide to others; they then combine these components with their own to produce an installer.
In the ideal world they would only need to add new files to the output. However, this is a replacement for an existing system where they currently have the ability to modify or even replace items (such as MyFile.txt) in the end installer, so without the ability to do the same with the merge module the migration path will be difficult.
The packaged-up component doesn't need to be a merge module if there is a better solution; however, merge modules seemed like the sensible choice and in all other respects provide a very nice re-usable package of installer logic.

It's possible, but every technique that I know of is a bit of a hack and doesn't scale very well. Can you tell me more about what type of file MyFile.txt is and what the intent of the different flavors of the file is? Usually my goal is to never have the same filename twice (darn component rules) and then design variation points to support the needs. Sometimes upstream changes to the application are required to do this correctly.

Related

Optionally leave old version of component on upgrade

I've been trying to set up a WiX component such that the user can specify that the installer should not upgrade that component on a MajorUpgrade. I had the following code, but with it, when KEEPOLDFILE is set to TRUE the new version is not installed, but the old version is also removed.
<Component Id="ExampleComponent" Guid="{GUID here}">
  <Condition>NOT (KEEPOLDFILE="TRUE")</Condition>
  <File Id="ExampleFile" Name="File.txt" KeyPath="yes" Source="File.txt"/>
</Component>
Ideally, if the user specifies "KEEPOLDFILE=TRUE", then the existing version of "File.txt" should be kept. I've looked into using the Permanent attribute, but this doesn't look relevant.
Is this possible to achieve without using CustomActions?
A bit more background information would be useful, however:
If your major upgrade is sequenced early (e.g. afterInstallInitialize) the upgrade is an uninstall followed by a fresh install, so saving the file is a tricky proposition because you'd save it, then do the new install, then restore it.
If the upgrade is late, then file overwrite rules apply during the upgrade, therefore a modified unversioned file won't be replaced anyway. You'd need to do something such as make the creation and modified timestamps identical so that Windows Installer will overwrite it with the new one. The solution in this case would be to run a custom action conditioned on "keep old file", so you'd do the reverse of this:
https://blogs.msdn.microsoft.com/astebner/2013/05/23/updating-the-last-modified-time-to-prevent-windows-installer-from-updating-an-unversioned-file/
And it's also not clear if that file is ALWAYS updated, so if in fact it has not been updated then why bother to ask the client whether to keep it?
It might be simpler to ignore the Windows Installer behavior by setting its component id to null, as documented here:
https://msdn.microsoft.com/en-us/library/windows/desktop/aa368007(v=vs.85).aspx
Then you can do what you want with the file. If you've already installed it with a component guid it's too late for this solution.
There are better solutions that require the app to get involved where you install a template version of this file. The app makes a copy of it that it always uses. At upgrade time that template file is always replaced, and when the app first runs after the upgrade it asks whether to use the new file (so it copies and overwrites the one it was using) or continue to use the existing file. In my opinion delegating these issues to the install is not often an optimal solution.
Setting attributes like Permanent is typically not a good idea because they are not project attributes you can turn on and off on a whim - they apply to that component id on the system, and permanent means permanent.
I tried to make this a comment, but it became too long. I prefer option 4 that Phil describes. Data files should not be meddled with by the setup, but managed by your application exe (if there is one) during its launch sequence. I don't know about others, but I feel like a broken record repeating this advice, but hear us out...
There is a description of a way to manage your data file's overwriting or preservation here. Essentially you update your exe to be "aware" of how your data file should be managed - if it should be preserved or overwritten, and you can change this behavior per version of your application exe if you like. The linked thread describes registry keys, but the concept can be used for files as well.
So essentially:
Template: Install your file per-machine as a read-only template
Launch Sequence: Copy it into place with application.exe launch-sequence magic (a rough sketch of this follows below)
Complex File Revision: Update the logic for file overwrite or preservation for every release as you see fit, along the lines the linked thread proposes
Your setup will "never know" about your data file, only the template file. It will leave your data file alone in all cases; it deals only with the template file.
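To make the idea concrete, here is a minimal sketch in Go of what the launch sequence might do (any language works; the paths and the ensureDataFile helper are made up for illustration): if the per-user data file does not exist yet, copy it from the read-only template the installer laid down; otherwise leave the user's copy alone.

package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

// ensureDataFile copies the installed, read-only template to the per-user
// data location on first launch. The installer only ever owns templatePath;
// dataPath belongs to the application and is never touched by the setup.
func ensureDataFile(templatePath, dataPath string) error {
	if _, err := os.Stat(dataPath); err == nil {
		return nil // the user already has a data file - preserve it
	}
	if err := os.MkdirAll(filepath.Dir(dataPath), 0o755); err != nil {
		return err
	}
	src, err := os.Open(templatePath)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.Create(dataPath)
	if err != nil {
		return err
	}
	defer dst.Close()
	_, err = io.Copy(dst, src)
	return err
}

func main() {
	// Example paths only - a real application would resolve these from its
	// install location and the user's profile/AppData directory.
	template := `C:\Program Files\MyApp\Templates\File.txt`
	data := filepath.Join(os.Getenv("APPDATA"), "MyApp", "File.txt")
	if err := ensureDataFile(template, data); err != nil {
		log.Fatal(err)
	}
}

Per-release overwrite-or-preservation logic (the "Complex File Revision" point above) would live inside ensureDataFile, keyed off the application version.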
Liberating your data files from the setup has many advantages:
Setup.exe bugs: No unintended, accidental file overwrites or file reset problems from a problematic major upgrade, etc. This is a very common problem with MSI.
Setup bugs are hard to reproduce and debug since the conditions found on the target systems can generally not be replicated and debugging involves a lot of unusual technical complexity.
It is not great - it is messy - but here is a list of common MSI problems: How do I avoid common design flaws in my WiX / MSI deployment solution? It is a "best effort in the interest of helping" sort of thing. Let's be honest, it is a mess, but maybe it is helpful.
Application.exe Bugs: Keep in mind that you can make new bugs in your application.exe file, so you can still see errors - obviously. Bad ones too - if you are not careful - but you can easily implement a backup feature as well - one that always runs in a predictable context.
You avoid the complicated sequencing, conditioning and impersonation concerns that make custom actions and modern setups so complicated to do right and make reliable.
Following from that and other technical and practical reasons: it is much easier to debug problems in the application launch sequence than bugs in your setup.
You can easily set up test conditions and test them interactively. In other words you can re-create problem conditions easily and test them in seconds. It could take you hours to do so with a setup.
Error messages can be interactive and meaningful and be shown to the user.
QA people are more familiar with testing application functionality than setup functionality.
And I repeat it: you are always in the same impersonation context (user context) and you have no installation sequence to worry about.

Is it possible to pass a variable from the build process to Visual Basic code?

My goal is to create build definitions within Visual Studio Team Services for both test and production environments. I need to update 2 variables in my code which determine which database and which blob storage the environment uses. Up till now, I've juggled this value in a Resource variable, and pulled that value in code from My.Resources.DB for a library, and Microsoft.Azure.CloudConfigurationManager.GetSetting("DatabaseConnectionString") for an Azure worker role. However, changing 4 variables every time I do a release is getting tiring.
I see a lot of posts that get close to what I want, but they're geared towards C#. For reasons beyond my influence, this project is written in VB.NET. It seems I have 2 options. First, I could call the MSBuild process with a couple of defined properties, passing them to the .metaproj build file, but I don't know how to get them to be used in VB code. That's preferable, but, at this point, I'm starting to doubt that this is possible.
I've been able to set some pre-processor constants, to be recognized in #If-#Else directives.
#If DEBUG = True Then
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.xxx")
#Else
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.133")
#End If
msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DefineConstants="DEBUG=False"
This seems to work, though I need to Rebuild to change the value of that constant. Should I have to? Should Build be enough? Is this normal, or an indication that I don't have something set quite right?
I've seen other posts that talk about pre-processing the source files with some other builder, like Ant, but that seems like overkill. It feels like I'm close here. But I want to zoom out and ask, from a clean sheet of paper, if you're given 2 variables which need to change per environment, you're using VB.NET, and you want to incorporate those variable values in an automated VS Team Services build process upon code check-in, what's the best way to do it? (I want to define the variables in the VSTS panel, but this just passes them to my builder, so I have to know how to parse the call to MSBuild to make these useful.)
I can control picking between 2 static strings, now, via compiler directives, but I'd really like to reference the Build.BuildNumber that comes out of the MSBuild process to display to the user, and, if I can do that, I can just feed the variables for database and blob container via the same mechanism, and skip the pre-processor.
You've already found the way you can pass data from the MsBuild arguments directly into the code. An alternative is to use the Condition attribute in your project files to make certain property groups optional; it even allows you to include specific files conditionally. You can control conditions by passing in /p:ConditionalProperty=value on the MsBuild command line. This at least ensures people use a set of values that make sense together.
The problem is that when MsBuild is running in incremental mode it is likely not to process your changes (as you've noticed); the reason for this is that the input files remain unchanged since the last build and are all older than the last generated output files.
To bypass this behavior you'd normally create a separate solution configuration and override the output location for all projects to be unique for that configuration. Combined with setting the compiler constants for that specific configuration, you're ensured that when building that Configuration/Platform combination, incremental builds work as intended.
I do want to echo some of the comments from JerryM and Daniel Mann. Some items are better stored elsewhere or updated before you actually start the compile phase.
Possible solutions:
Store your configuration data in config files and use Configuration Transformation to generate the right config file based on the selected solution configuration. The process is explained on MSDN. To enable configuration transformation on all project types, you can use SlowCheetah.
Store your configuration data in the config files and use MsDeploy, specifying a Parameters.xml file that matches the deploy package. It will perform the transformation at deploy time and will actually allow your solution to contain a standard config file you use at runtime, plus a publish profile which will post-process your configuration. You can use a SetParameters.xml file to override the variables at deploy time.
Create an installer project (such as through WiX) and merge the final configuration at install time (similar to MsDeploy). You could even provide a UI which prompts for specific values (and can supply default values).
Use a CI server, like the new TFS/VSTS 2015 task-based build engine, and combine it with a task that can search & replace tokens, like the Replace Tokens task, Tokenization Task, or Colin's ALM Corner Build and Release Tasks, plus a whole bunch that specifically deal with versioning. Handling these things in the CI server also allows you to do a quick build locally at all times and do these relatively expensive steps on the build server (patching source code breaks incremental builds in MsBuild, because there are always newer input files).
When talking specifically about versioning, there are a number of ways to set the AssemblyVersion and AssemblyFileVersion just before compile time; usually it involves overriding the AssemblyInfo.cs file before compilation. Your code could then use reflection to read the value at runtime. You can use the AssemblyInformationalVersion to specify something like you do in the example above which contains .xxx or other text. It also ensures that the version displayed always reflects the information obtained when reading the file properties through Windows Explorer.

How can I handle platform-specific modules in Go?

I'm writing a command-line utility in Go that (as part of its operation) needs to get a password from the user. There's a great gopass module for Unix that does this, and I know how to write one for the Windows console. The problem is that the Windows module obviously won't build on *nix, and the *nix version won't build on Windows. Since Go lacks any preprocessor support (as far as I can tell), I have absolutely no idea what the right way to approach this is. I know it's possible, since Go itself must do this for its own libraries, but the tooling I'm used to (conditional imports/preprocessors/etc.) seems to be missing.
Go has build constraints, which can either be specified as comments in a .go file, or as part of the file name.
One set of constraints is for the target operating system, so you can have one file for Windows and one for e.g. Linux, and implement the same function in two different ways in the two files.
More information on build constraints is at http://golang.org/pkg/go/build/#hdr-Build_Constraints
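As a minimal sketch (the GetPassword function and the password package are just illustrations; wire them up to gopass or to your Windows console code as appropriate), the implementation can be split across two files that declare the same function:

// password_other.go - compiled on every OS except Windows
//go:build !windows
// +build !windows

package password

// GetPassword would wrap the *nix implementation (e.g. the gopass package).
func GetPassword(prompt string) (string, error) {
	// ... *nix-specific code ...
	return "", nil
}

// password_windows.go - compiled only on Windows
//go:build windows
// +build windows

package password

// GetPassword would wrap the Windows console implementation.
func GetPassword(prompt string) (string, error) {
	// ... Windows-specific code ...
	return "", nil
}

Only one of the two files is compiled for any given GOOS, so the rest of the program simply calls GetPassword. Newer Go toolchains use the //go:build form and older ones the // +build comment (the sketch shows both); alternatively, a file name ending in _windows.go is constrained to Windows with no comment at all.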

What is the MSI component generation best practice?

Visual Studio Installer states that it is a best practice to install each file as an installer component. The heat utility provided with Wix also seems to follow the practice of putting every file in its own component.
InstallShield's component wizard uses InstallShield's setup best practice of placing portable executable files in their own component but groups all other files (e.g. unversioned files) by the common destination folder.
The advantage of practice one (each file in its own component) is that each file is set up as a key file, which is important if you want these files to trigger repairs. It also makes automating component creation (e.g. with heat) easier, since you are creating a component for each file.
The disadvantages of practice one include the overhead of managing so many components and the bloating of the registry after the application is installed.
An advantage of practice two could be seen in an install that puts hundreds of graphics files in one directory. If you do not care about repair functionality, is there any reason to create hundreds of components for this install?
These two practices conflict, and I want to know which one people actually use and why.
I always use the Microsoft approach (something similar to what InstallShield does):
http://msdn.microsoft.com/en-us/library/aa368269(VS.85).aspx
I think it's the best because:
- important files (EXE, DLL etc.) have their own component, so they can be repaired easily
- resource files are grouped together
- it allows an optimum component count (not so many that you get a long install, but enough to allow an easy repair)
I also noticed that most commercial setup authoring tools use this approach.
I've written about this in the past and I'll try to find a link to it. I think you already understand the question and it's just time for you to decide what is important to you.
For me, I work on installs with 15,000+ files and we only service with major upgrades. For "Program Executables" we follow 1:1 principles (a must for COM, services, shortcuts and so on anyway), but for content/data files we actually do a 1-to-many, no-key-file approach to cut down on our number of components. Sure, that means we won't be able to create an MSP that services just one or two content files here and there, but for our business needs that's simply not important to us.
Resiliency was a bit of a 4-letter word to us, so having fewer key files makes us happier anyway. :-) BTW, VDPROJ also makes every registry key a key file of its own component, and that was quite painful for us, triggering unneeded repairs.
All of this aside, for anyone who doesn't fully understand all of this, I'd stick to the 1:1 pattern until you come across a situation where you don't want to anymore and you understand the impact of making that choice.

How to determine where, or if, a variable is used in an SSIS package

I've inherited a collection of largely undocumented SSIS packages. The entry point package (i.e. the one that forks off in a variety of directions to call other packages) defines a number of variables. I would like to know how these variables are being used, but there doesn't seem to be an equivalent of "right click/Find All References".
Is there a reliable way to determine where these variables are being used?
A hackish way would be to open the dtsx file in a text editor/xml viewer and search for the variable name.
If it's being used in expressions, the search will show it, and you can trace the XML tree back up until you find the object it's being used on.
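If there are many packages to check, a small script can run that search across all of them. Here is a minimal sketch in Go (the variable name and the root folder are placeholders) that prints every line of every .dtsx file under a folder that mentions the variable:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	variable := "User::MyVariable" // placeholder: the variable you are looking for
	root := "."                    // placeholder: folder containing the .dtsx files

	filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.EqualFold(filepath.Ext(path), ".dtsx") {
			return nil
		}
		f, err := os.Open(path)
		if err != nil {
			return nil
		}
		defer f.Close()

		scanner := bufio.NewScanner(f)
		// .dtsx files can contain very long lines, so enlarge the scanner buffer.
		scanner.Buffer(make([]byte, 1024*1024), 16*1024*1024)
		lineNo := 0
		for scanner.Scan() {
			lineNo++
			if strings.Contains(scanner.Text(), variable) {
				fmt.Printf("%s:%d: %s\n", path, lineNo, strings.TrimSpace(scanner.Text()))
			}
		}
		return nil
	})
}

Variable references in expressions usually appear as @[User::MyVariable], so searching for the User::Name form catches those as well as configuration entries.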
You can use the BIDS Helper add-in, which gives you visual feedback on where variables are used in your package. That makes it very fast and easy to detect them. Besides that, it offers several other valuable features.
Check out: http://bidshelper.codeplex.com/