I've dabbled with MSBuild before, but this is my first foray into partial builds. I've got everything working, but not as well as I'd like. I cannot get around this message:
Skipping target "BuildLocalizerSetting" because it has no outputs.
Though the target has declared its outputs, the output specification only references empty properties and/or empty item lists.
If I delete the Inputs attribute from the target, everything works fine. I suspect this has to do with missing transforms, but specifying a transform is not really possible in my case.
My setup is slightly different from a typical build scenario. I have a manifest file which contains a list of files to be compiled. That is, I go from a scenario where one input file generates a (potentially long) list of items in an item group.
I've built a custom task for parsing the manifest file, and it seems to do the job well, although I did run into one snag: TaskItem doesn't allow setting certain well-known metadata, like "Filename", "Extension", "ModifiedTime", etc. The issue was quickly resolved by implementing ITaskItem in a custom TaskItem class. I know dirty checking works as it should, since MSBuild does detect that those items have been modified. But now I wonder... do I risk a build error on some odd platform due to that implementation?
But most importantly: why can't MSBuild accept that the inputs have changed, and then conclude that the outputs have changed as well?
Every example I've found assumes either a one-to-one relationship between input and output, or many-to-one; but in my scenario I go from one to many. Can that be done?
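To make the shape of the problem concrete, here is a rough sketch of what I'm doing (task and item names like ParseManifest and LocalizedFile are simplified placeholders, not my real code):

<Target Name="ParseManifest">
  <!-- Custom task reads the manifest and emits one item per file to build -->
  <ParseManifest ManifestFile="Localizer.manifest">
    <Output TaskParameter="Files" ItemName="LocalizedFile" />
  </ParseManifest>
</Target>

<Target Name="BuildLocalizerSetting"
        DependsOnTargets="ParseManifest"
        Inputs="Localizer.manifest"
        Outputs="@(LocalizedFile->'$(OutDir)%(Filename).settings')">
  <!-- one manifest in, many generated files out -->
</Target>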
I read this post on the contents of a solution file, but I still have no clue about the actual purpose of dependencies provided within a solution file rather than within the project file itself.
It seems there are two ways of having project 2 depend on project 1:
add a project reference from p2 to p1. This will alter the csproj file for p2 by introducing a ProjectReference to p1.csproj, but won't change the solution, as far as I understand.
add an assembly reference from p2 to p1. This will also alter the csproj file by using a Reference to a compiled assembly (dll). However, it also adds a ProjectDependency into the solution file, which I do not understand. Why is this second entry within the solution needed in this case? Isn't the assembly reference provided within the csproj file for p2 sufficient?
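For illustration, the entry added to the .sln looks roughly like this (the GUIDs are placeholders); it is a ProjectSection nested inside P2's Project block:

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "P2", "P2\P2.csproj", "{22222222-2222-2222-2222-222222222222}"
    ProjectSection(ProjectDependencies) = postProject
        {11111111-1111-1111-1111-111111111111} = {11111111-1111-1111-1111-111111111111}
    EndProjectSection
EndProject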
It's purely historical. The new project files don't really need it anymore, but the .sln file format predates msbuild and thus the solution file has some duplication.
It's used to define the build order, which becomes more important when you have ancient project types in your solution, as these won't be able to declare their build order themselves. It's also used to declare and validate build order between unrelated projects (e.g., projects that don't reference each other), without the IDE having to load and parse all the projects.
Your second case is one of those cases where the solution file keeps track of build order. It then knows it needs to build P1 prior to P2. Without the solution-level reference that information would be lost. It's quite clever that this is automatically detected and added; in the past you needed to manually define such build-order dependencies.
At compile time the .sln is transformed into an MSBuild file which then orchestrates the build (see an example here). You can set the MSBuildEmitSolution environment variable to generate yours.
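For example, from a plain command prompt (the solution name is a placeholder):

rem Dumps the generated MySolution.sln.metaproj next to the solution
set MSBuildEmitSolution=1
msbuild MySolution.sln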
<TL;DR>
The solution file is ancient and has some artefacts left over from pre-MSBuild days. Some things just need to be there for 'reasons'.
If they were to build VS from scratch, the solution file would look very different.
I'm using add_custom_command() to generate some files. ninja clean removes them, as it should. One of the files is intended as a default/example implementation, to be modified by the user. It is only generated if it does not already exist. I would like for ninja clean not to remove this file.
I have tried a number of things but without success:
add_custom_target(): CMake complains about the missing file unless I name it in BYPRODUCTS, but doing this also leads to removal on clean
set_source_files_properties(... GENERATED FALSE) doesn't work because CMake complains about the file missing.
set_directory_properties() failed in a similar way: "folder doesn't exist or not yet processed" (it does exist)
I previously generated the example implementation and just let the user copy it or model their code on it. This works, but isn't entirely satisfactory. Is my use-case so unlikely that CMake doesn't support it?
I am afraid your requirement (conceptually, to have make create something which make clean does not remove) is rather unusual. I can think of two potential solutions/workarounds.
One, move the file's generation to CMake time. That is, create it using execute_process() instead of add_custom_command(). This may or may not be possible, based on whether the file-generation process (the current custom command) depends on the rest of the build or not.
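A configure-time version could look like this (mygenerator and example.conf stand in for your actual tool and file):

if(NOT EXISTS "${CMAKE_CURRENT_BINARY_DIR}/example.conf")
  # Runs while CMake configures, so the build system never knows about the file
  execute_process(
    COMMAND mygenerator --output "${CMAKE_CURRENT_BINARY_DIR}/example.conf"
    WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}"
  )
endif()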
Two, totally hide the example file's existence from CMake. That is, have the custom command also generate some other file (maybe just a timestamp file) and have its driving custom target depend on that one instead. Do not list the example file as either the custom command's dependency, output, or byproduct. That way, nothing will depend on it, and neither CMake nor Ninja should care whether it exists or not, so they will not complain or try to clean it up.
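A rough sketch of that second idea (placeholder names again; the generator is assumed to leave an existing example.conf alone):

add_custom_command(
  OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/example.stamp"
  COMMAND mygenerator --output "${CMAKE_CURRENT_BINARY_DIR}/example.conf"
  COMMAND "${CMAKE_COMMAND}" -E touch "${CMAKE_CURRENT_BINARY_DIR}/example.stamp"
  COMMENT "Generating example.conf if missing"
)
# Only the stamp file is known to the build system, so only the stamp gets cleaned
add_custom_target(generate_example ALL
  DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/example.stamp"
)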
If it is an example for the user, it should not be in your build folder, but in the install folder. I don't see why you would need add_custom_command or the other commands you listed.
Therefore, you have to provide install() instructions.
You can then call make install. Cleaning will not remove those and only installing again will overwrite them if necessary.
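A sketch of the idea (destination and file names are illustrative):

# Ship the example from the source tree into the install tree
install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/examples/example.conf"
        DESTINATION share/myproject/examples)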
For those who come here a long time after the original question was asked (like me), I'll write up my solution:
The tool called in add_custom_command generates two files with identical content:
one that is saved in sources, never mentioned anywhere
and one that's marked as byproduct, and then is depended on
So the first one is the file we wanted in the first place.
And the second one is actually used in the build process, and gets deleted on clean.
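In CMake terms the setup is roughly this (tool and file names are placeholders; the generator is assumed to accept two output paths):

add_custom_target(codegen ALL
  BYPRODUCTS "${CMAKE_CURRENT_BINARY_DIR}/generated.h"    # build copy: depended on, removed by 'ninja clean'
  COMMAND mygenerator
          --out "${CMAKE_CURRENT_SOURCE_DIR}/generated.h" # source copy: committed to VCS, never mentioned to CMake
          --out "${CMAKE_CURRENT_BINARY_DIR}/generated.h"
)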
For me the issue is that I actually want to keep generated files in VCS so I can track changes, and this approach gives me what I need.
My goal is to create build definitions within Visual Studio Team Services for both test and production environments. I need to update 2 variables in my code which determine which database and which blob storage the environment uses. Up till now, I've juggled this value in a Resource variable, and pulled that value in code from My.Resources.DB for a library, and Microsoft.Azure.CloudConfigurationManager.GetSetting("DatabaseConnectionString") for an Azure worker role. However, changing 4 variables every time I do a release is getting tiring.
I see a lot of posts that get close to what I want, but they're geared towards C#. For reasons beyond my influence, this project is written in VB.NET. It seems I have 2 options. First, I could call the MSBuild process with a couple of defined properties, passing them to the .metaproj build file, but I don't know how to get them to be used in VB code. That's preferable, but, at this point, I'm starting to doubt that this is possible.
I've been able to set some pre-processor constants, to be recognized in #If-#Else directives.
#If DEBUG = True Then
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.xxx")
#Else
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.133")
#End If
msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DefineConstants="DEBUG=False"
This seems to work, though I need to Rebuild to change the value of that constant. Should I have to? Should Build be enough? Is this normal, or an indication that I don't have something set quite right?
I've seen other posts that talk about pre-processing the source files with some other builder, like Ant, but that seems like overkill. It feels like I'm close here. But I want to zoom out and ask, from a clean sheet of paper, if you're given 2 variables which need to change per environment, you're using VB.NET, and you want to incorporate those variable values in an automated VS Team Services build process upon code check-in, what's the best way to do it? (I want to define the variables in the VSTS panel, but this just passes them to my builder, so I have to know how to parse the call to MSBuild to make these useful.)
I can control picking between 2 static strings, now, via compiler directives, but I'd really like to reference the Build.BuildNumber that comes out of the MSBuild process to display to the user, and, if I can do that, I can just feed the variables for database and blob container via the same mechanism, and skip the pre-processor.
You've already found the way you can pass data from the MsBuild Arguments directly into the code. An alternative is to use the Condition Attribute in your project files to make certain property groups optional, it allows you to even include specific files conditionally. You can control conditions by passing in /p:ConditionalProperty=value on the MsBuild command. This at least ensures people use a set of values that make sense together.
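A minimal sketch of that (the TargetEnvironment property name is invented for illustration):

<!-- In the .vbproj: flip compiler constants per environment -->
<PropertyGroup Condition="'$(TargetEnvironment)' == 'Test'">
  <DefineConstants>$(DefineConstants),TESTENV=True</DefineConstants>
</PropertyGroup>

and invoke it with:

msbuild CalbertNG.sln.metaproj /t:Rebuild /p:TargetEnvironment=Test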
The problem is that when MsBuild is running in incremental mode, it is likely not to process your changes (as you've noticed); the reason for this is that the input files remain unchanged since the last build and are all older than the last generated output files.
To bypass this behavior you'd normally create a separate solution configuration and override the output location for all projects to be unique for that configuration. Combined with setting the compiler constants for that specific configuration, you're ensured that when building that Configuration/Platform combination, incremental builds work as intended.
I do want to echo some of the comments from JerryM and Daniel Mann. Some items are better stored elsewhere or updated before you actually start the compile phase.
Possible solutions:
Store your configuration data in config files and use Configuration Transformation to generate the right config file based on the selected solution configuration. The process is explained on MSDN. To enable configuration transformation on all project types, you can use SlowCheetah.
Store your configuration data in the config files and use MsDeploy with a Parameters.xml file that matches the deploy package. It will perform the transformation at deploy time and will actually allow your solution to contain a standard config file you use at runtime, plus a publish profile which will post-process your configuration. You can use a SetParameters.xml file to override the variables at deploy time.
Create an installer project (such as through Wix) and merge the final configuration at install time (similar to the MsDeploy). You could even provide a UI which prompts for specific values (and can supply default values).
Use a CI server, like the new TFS/VSTS 2015 task-based build engine, and combine it with a task that can search & replace tokens, like the Replace Tokens task, Tokenization Task, or Colin's ALM Corner Build and Release Tasks. And a whole bunch that specifically deal with versioning. Handling these things in the CI server also allows you to do a quick build locally at all times and do these relatively expensive steps on the build server (patching source code breaks incremental builds in MsBuild, because there are always newer input files).
When talking specifically about versioning, there are a number of ways to set the AssemblyVersion and AssemblyFileVersion just before compile time; usually it involves overwriting the AssemblyInfo.cs file before compilation. Your code could then use reflection to read the value at runtime. You can use the AssemblyInformationalVersion to specify something like you do in the example above, which contains .xxx or other text. It also ensures that the version displayed always reflects the information obtained when reading the file properties through Windows Explorer.
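In VB.NET, reading it back at runtime could look something like this minimal sketch:

Imports System.Linq
Imports System.Reflection

Module VersionInfo
    ' Returns the AssemblyInformationalVersion baked in at build time
    Public Function GetInformationalVersion() As String
        Dim attr = Assembly.GetExecutingAssembly().
            GetCustomAttributes(GetType(AssemblyInformationalVersionAttribute), False).
            OfType(Of AssemblyInformationalVersionAttribute)().
            FirstOrDefault()
        Return If(attr Is Nothing, "unknown", attr.InformationalVersion)
    End Function
End Module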
I'm trying to integrate custom dynamic analysis tools to CDash. Such as KWStyle, CppCheck and Visual Leak Detector.
I've figured out that I need to generate a DynamicAnalysis.xml file and submit it to CDash, from CTest scripts.
I think I know how to run the external tool as a part of the ctest script.
Either by using these variables to change how ctest_memcheck() works
CTEST_MEMORYCHECK_COMMAND
CTEST_MEMORYCHECK_SUPPRESSIONS_FILE
CTEST_MEMORYCHECK_COMMAND_OPTIONS
or by running the tool from the execute_process() command.
But I'm a bit uncertain which one to use.
The main problem I think I have is, how can I extract errors from the output of the custom tool and include that information into the DynamicAnalysis.xml to submit?
The extreme solution I see is that I'd need to make a program that generates a valid DynamicAnalysis.xml file.
But the problem is that I don't know the syntax of the DefectList element in the XML file. I have found no answer from google and even the XML Schema for that file is unhelpful.
EDIT:
Looking at this:
http://www.cdash.org/CDash/viewDynamicAnalysis.php?buildid=987149
What draws my attention are the labels, especially the empty ones. I don't see how these would come from the DynamicAnalysis.xml file. Maybe it tracks any labels that have ever appeared? Can I create my own custom labels somehow?
Does CDash create the labels automatically, depending on the tool type? Does this block custom defect types?
I'm just guessing here, so the question is: can I create custom labels for my custom tool, just by generating a DynamicAnalysis.xml file?
It occurred to me that the number of different errors from CppCheck (static code analysis) is huge compared to valgrind, for instance. I'm not that certain that I should use dynamic analysis. Maybe a custom build type (Continuous / Experimental / Nightly) would work better. Like this:
http://www.cdash.org/CDash/buildSummary.php?buildid=930174
I have no idea how to do this; I guess it requires meddling around with the CDash code?
Which one would work better?
If you are using valgrind, you can simply set CTEST_MEMORYCHECK_COMMAND to the full path to valgrind, and ctest will generate the DynamicAnalysis.xml file for you from the valgrind output when you call ctest_memcheck.
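A minimal ctest -S driver along those lines (paths and generator are placeholders):

# memcheck.cmake -- run with: ctest -S memcheck.cmake
set(CTEST_SOURCE_DIRECTORY "/path/to/source")
set(CTEST_BINARY_DIRECTORY "/path/to/build")
set(CTEST_CMAKE_GENERATOR "Ninja")
set(CTEST_MEMORYCHECK_COMMAND "/usr/bin/valgrind")
set(CTEST_MEMORYCHECK_COMMAND_OPTIONS "--leak-check=full")

ctest_start(Experimental)
ctest_configure()
ctest_build()
ctest_memcheck()   # produces DynamicAnalysis.xml from the valgrind output
ctest_submit()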
The best way to understand the possible values that can appear in the DynamicAnalysis.xml file is to analyze the source code of CTest.
The file CMake/Source/CTest/cmCTestMemCheckHandler.cxx has the list of defect types in a variable named "cmCTestMemCheckResultLongStrings". Search through that file for references to that variable to see what the possible values are and how they are used to generate "<Defect/>" xml elements.
EDIT (for additional information):
You can also easily see what XML elements CDash is expecting by inspecting its source code. Specifically, the file "CDash/xml_handlers/dynamic_analysis_handler.php".
From what I've learned so far, for a tool that runs on the tests made in the CMake script, Dynamic Analysis is the thing.
For tools that run on the entire program, a custom Build.xml is the thing you need.
I found out that I can submit those files from the ctest_submit command by using the FILES parameter.
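For example (the exact path to the generated file is assumed here):

ctest_submit(FILES "${CTEST_BINARY_DIRECTORY}/DynamicAnalysis.xml")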
I also found out that you can add custom "build names" to the side of Continuous, Nightly, and others.
And that you can set the builds from certain machines to be automatically transferred under these.
The custom labels under DynamicAnalysis did come from somewhere in CDash; I can't remember where anymore.
What reasons could there be for the following strange behaviour, and how might I track down the issues?
We use a combination of make files and msbuild.
I have a project which needs to be strongly named. I was previously setting the snk to use in the project file like this:
<AssemblyOriginatorKeyFile>$(EnvironmentVariable)TheKeyName.snk</AssemblyOriginatorKeyFile>
where EnvironmentVariable was defined in the batch file that launched the shell for the build like this:
set EnvironmentVariable='SomePath'
and this worked OK. Now I need the strong name key to be able to change, so it can be different on the dev machine and the release build server. There is a variable which exists to hold the full path to the strong name key file, called StrongNameKeyFile. This is defined in the MSBuild environment, and if I put some text output in the targets or properties files that are included as part of the MSBuild task which builds the project, I can see that StrongNameKeyFile points to the correct location. So I changed the csproj to have this instead:
<AssemblyOriginatorKeyFile>$(StrongNameKeyFile)</AssemblyOriginatorKeyFile>
but when I try to compile, this evaluates to empty and no /keyfile is specified during the build.
We also have variables defined in the make files, and these can be accessed in the csproj as well. These are used to point to the locations of referenced dlls, so that they can be different on dev and build machines. I know that these are set, as the references come out correctly and everything compiles; but if I try to use one of these variables in the AssemblyOriginatorKeyFile element, it evaluates to empty in that element, even though it works in the Reference element.
Why might this be? Is AssemblyOriginatorKeyFile treated specially somehow? How can I go about tracking the cause of this down?
There's no good reason why this should happen - as you know it normally Just Works; it's likely to be something in the chain dropping it on the floor.
One thing to try is explicitly passing it via /p:StrongNameKeyFile=XX - that would eliminate environment variables and the correct propagation thereof from your inquiries.
Another possibility is that something is clobbering the variable because the name is used by something else.
Run with /v:diag and you'll get dumps of all the inputs and/or variables as they change.
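For example (key path hypothetical), combining both suggestions and capturing the output:

msbuild MyProject.csproj /t:Rebuild /p:StrongNameKeyFile=C:\keys\TheKeyName.snk /v:diag > build.log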
Or if on V4, use the MSBuild Debugger
And buy the Hashimi et al MSBuild book