What's the purpose of ProjectDependencies in a VS solution file? - MSBuild

I read this post on the contents of a solution file, but still have no clue about the actual purpose of dependencies provided within a solution-file rather than within the project-file itself.
It seems there are two ways of having project 2 depending on project 1:
add a project reference from p2 to p1. This will alter the .csproj file for p2 by introducing a ProjectReference to p1.csproj but, as far as I understand, won't change the solution.
add an assembly reference from p2 to p1. This will also alter the .csproj file, using a Reference to a compiled assembly (DLL). However, it also adds a ProjectDependency to the solution file, which I do not understand. Why is this second entry within the solution needed in this case? Isn't the assembly reference provided within the .csproj file for p2 sufficient?

It's purely historical. The new project files don't really need it anymore, but the .sln file format predates MSBuild, so the solution file contains some duplication.
It's used to define the build order, which becomes more important when you have ancient project types in your solution, as these can't declare build order themselves. It's also used to declare and validate build order between unrelated projects (i.e., projects that don't reference each other) without the IDE having to load and parse all the projects.
Your second case is one of those cases where the solution file keeps track of build order. It then knows it needs to build P1 prior to P2. Without the solution-level reference that information would be lost. It's quite clever that this is detected and added automatically; in the past you needed to define such build-order dependencies manually.
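For illustration, the solution-level dependency looks roughly like this in the .sln (the project names and GUIDs here are hypothetical; the outer GUID is the C# project-type GUID):

Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "P2", "P2\P2.csproj", "{22222222-2222-2222-2222-222222222222}"
    ProjectSection(ProjectDependencies) = postProject
        {11111111-1111-1111-1111-111111111111} = {11111111-1111-1111-1111-111111111111}
    EndProjectSection
EndProject

The {1111...} GUID would be P1's project GUID, so the solution knows P1 must build before P2 even though P2's .csproj only references the DLL.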
At compile time the .sln is transformed into an MSBuild file, which then orchestrates the build (see an example here). You can set an environment variable to generate yours.
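For example (the solution name here is hypothetical), from a developer command prompt:

set MSBuildEmitSolution=1
msbuild MySolution.sln /t:Rebuild

With MSBuildEmitSolution set, MSBuild writes the generated MySolution.sln.metaproj next to the solution, so you can inspect how the ProjectDependencies entries turn into build ordering.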
TL;DR:
The solution file is ancient and has some artefacts left over from pre-MSBuild days. Some things just need to be there for 'reasons'.
If they were to build VS from scratch, the solution file would look very different.

Related

Adding two new build phases to an Xcode framework project

I am building a project on GitHub written in Objective-C. It resolves MAC addresses to manufacturer details. The lookup table is currently stored as a text file, manuf.txt (from the Wireshark project), which is parsed at run-time; this is costly. I would prefer to compile it down to archived objects at build time and load those instead.
I would like to amend the build phases such that I:
Build a simple compiler
Run the compiler, parsing manuf.txt and outputting archived objects
Build the framework
Copy the archived objects into the framework
I am looking for wisdom on how to achieve steps 1 and 2 using Xcode v7.3, as Xcode provides only a Copy Files phase or a Run Script phase. Examples of other projects achieving similar goals would be inspiring.
I suspect that what you are asking is possible, but tricky. The reason is that you will need to write a bunch of class files and then dynamically add them to the project.
Firstly you will need to employ a run script phase to run various tools from the command line to parse your file and generate a number of class files from it. I would suggest looking into various templating engines. For example appledoc uses moustache templates to generate API documentation files. You could use the same technique to generate header and implementation files.
Next, rather than generating archived objects and trying to import them into a framework, I think you may be better off generating raw source code, adding it to the project, and compiling it into the framework. Probably simpler in the long run.
To automatically include the generated code I would look into (which means I haven't actually tried this :-) adding a folder reference to the project rather than an Xcode group. Folder references are an option in the 'Add files to ...' dialog.
Folder references refer to a directory and automatically add the entire contents of that directory to a project. So you can use one to point to the directory where you have generated the source code. This is a much better option than trying to manipulate the project or injecting things into an established framework.
I would prefer to parse the file at runtime: after launch you can look for already-existing output and, if there is none, parse the file once.
However, I have to do something similar at Objective-Cloud. I simply added a run script build phase and put the compiler call into it.
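As a rough sketch of what such a Run Script phase could contain (the manufc tool name and the paths are hypothetical; the ${...} settings are standard Xcode build variables):

# Run Script phase: compile manuf.txt into an archived lookup table.
# Assumes a tool target named "manufc" is built earlier in the scheme.
"${BUILT_PRODUCTS_DIR}/manufc" \
    "${SRCROOT}/Resources/manuf.txt" \
    -o "${DERIVED_FILE_DIR}/manuf.archive"

A Copy Files phase can then copy manuf.archive into the framework's Resources directory.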

Is it possible to pass a variable from the build process to Visual Basic code?

My goal is to create build definitions within Visual Studio Team Services for both test and production environments. I need to update 2 variables in my code which determine which database and which blob storage the environment uses. Up till now, I've juggled this value in a Resource variable, and pulled that value in code from My.Resources.DB for a library, and Microsoft.Azure.CloudConfigurationManager.GetSetting("DatabaseConnectionString") for an Azure worker role. However, changing 4 variables every time I do a release is getting tiring.
I see a lot of posts that get close to what I want, but they're geared towards C#. For reasons beyond my influence, this project is written in VB.NET. It seems I have 2 options. First, I could call the MSBuild process with a couple of defined properties, passing them to the .metaproj build file, but I don't know how to get them to be used in VB code. That's preferable, but, at this point, I'm starting to doubt that this is possible.
I've been able to set some pre-processor constants, to be recognized in #If-#Else directives.
#If DEBUG = True Then
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.xxx")
#Else
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.133")
#End If
msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DefineConstants="DEBUG=False"
This seems to work, though I need to Rebuild to change the value of that constant. Should I have to? Should Build be enough? Is this normal, or an indication that I don't have something set quite right?
I've seen other posts that talk about pre-processing the source files with some other builder, like Ant, but that seems like overkill. It feels like I'm close here. But I want to zoom out and ask, from a clean sheet of paper, if you're given 2 variables which need to change per environment, you're using VB.NET, and you want to incorporate those variable values in an automated VS Team Services build process upon code check-in, what's the best way to do it? (I want to define the variables in the VSTS panel, but this just passes them to my builder, so I have to know how to parse the call to MSBuild to make these useful.)
I can control picking between 2 static strings, now, via compiler directives, but I'd really like to reference the Build.BuildNumber that comes out of the MSBuild process to display to the user, and, if I can do that, I can just feed the variables for database and blob container via the same mechanism, and skip the pre-processor.
You've already found a way to pass data from the MSBuild arguments directly into the code. An alternative is to use the Condition attribute in your project files to make certain property groups optional; it even allows you to include specific files conditionally. You can control conditions by passing /p:ConditionalProperty=value on the MSBuild command line. This at least ensures people use a set of values that make sense together.
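A minimal sketch of that pattern in the .vbproj (the Environment property name is an assumption, not something MSBuild defines; VB projects use comma-separated DefineConstants):

<PropertyGroup Condition="'$(Environment)' == 'Test'">
  <DefineConstants>$(DefineConstants),TESTENV=True</DefineConstants>
</PropertyGroup>
<ItemGroup Condition="'$(Environment)' == 'Test'">
  <Compile Include="Config.Test.vb" />
</ItemGroup>

invoked with msbuild MySolution.sln /p:Environment=Test.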
The problem is that when MSBuild is running in incremental mode it is likely not to process your changes (as you've noticed). The reason is that the input files remain unchanged since the last build and are all older than the last generated output files.
To bypass this behavior you'd normally create a separate solution configuration and override the output location for all projects to be unique for that configuration. Combined with setting the compiler constants for that specific configuration, you're ensured that incremental builds work as intended when building that Configuration/Platform combination.
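Sketched in each project file (the TestRelease configuration name is hypothetical):

<PropertyGroup Condition="'$(Configuration)' == 'TestRelease'">
  <OutputPath>bin\TestRelease\</OutputPath>
  <DefineConstants>TRACE=True,TESTENV=True</DefineConstants>
</PropertyGroup>

Because bin\TestRelease\ is used by no other configuration, its timestamps are compared only against previous TestRelease builds, so incremental builds stay correct.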
I do want to echo some of the comments from JerryM and Daniel Mann: some items are better stored elsewhere or updated before you actually start the compile phase.
Possible solutions:
Store your configuration data in config files and use Configuration Transformation to generate the right config file based on the selected solution configuration (see the transform sketch after this list). The process is explained on MSDN. To enable configuration transformation on all project types, you can use SlowCheetah.
Store your configuration data in the config files, use MsDeploy, and specify a Parameters.xml file that matches the deploy package. It will perform the transformation at deploy time and will actually allow your solution to contain a standard config file you use at runtime, plus a publish profile which will post-process your configuration. You can use a SetParameters.xml file to override the variables at deploy time.
Create an installer project (such as through Wix) and merge the final configuration at install time (similar to the MsDeploy). You could even provide a UI which prompts for specific values (and can supply default values).
Use a CI server, like the new TFS/VSTS 2015 task-based build engine, and combine it with a task that can search & replace tokens, like the Replace Tokens task, Tokenization Task, or Colin's ALM Corner Build and Release Tasks, plus a whole bunch that specifically deal with versioning. Handling these things in the CI server also allows you to do a quick build locally at all times and do these relatively expensive steps on the build server (patching source code breaks incremental builds in MSBuild, because there are always newer input files).
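As a sketch of the first option (the file name and connection string are hypothetical), a configuration transform rewrites only the attributes it names:

<!-- App.Test.config: applied when building the Test configuration -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="DatabaseConnectionString"
         connectionString="Server=test-db;Database=App;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>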
When talking specifically about versioning, there are a number of ways to set the AssemblyVersion and AssemblyFileVersion just before compile time; usually this involves overwriting the AssemblyInfo.cs file before compilation. Your code can then use reflection to read the values at runtime. You can use AssemblyInformationalVersion to specify something like the example above which contains .xxx or other text; it also ensures that the version displayed always reflects the information obtained when reading the file properties through Windows Explorer.
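In VB.NET the relevant attributes look like this (version values borrowed from the example above; these normally live in AssemblyInfo.vb or a generated Version.vb):

Imports System.Reflection

<Assembly: AssemblyVersion("1.18.0.0")>
<Assembly: AssemblyFileVersion("1.18.0.133")>
<Assembly: AssemblyInformationalVersion("1.18.0.xxx")>

AssemblyInformationalVersion accepts an arbitrary string, which is why it can carry text like .xxx while the other two must be numeric.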

In cmake, what is a "project"?

This question is about the project command and, by extension, what the concept of a project means in cmake. I genuinely don't understand what a project is, and how it differs from a target (which I do understand, I think).
I had a look at the cmake documentation for the project command, and it says that the project command does this:
Set a name, version, and enable languages for the entire project.
It should go without saying that using the word project to define project is less than helpful.
Nowhere on the page does it seem to explain what a project actually is (it goes through some of the things the command does, but doesn't say whether that list is exclusive or not). The cmake.org examples take us through a basic build setup, and while it uses the project keyword it also doesn't explain what it does or means, at least not as far as I can tell.
What is a project? And what does the project command do?
A project logically groups a number of targets (that is, libraries, executables and custom build steps) into a self-contained collection that can be built on its own.
In practice that means, if you have a project command in a CMakeLists.txt, you should be able to run CMake from that file and the generator should produce something that is buildable. In most codebases, you will only have a single project per build.
Note however that you may nest multiple projects. A top-level project may include a subdirectory which is in turn another self-contained project. In this case, the project command introduces additional scoping for certain values. For example, the PROJECT_BINARY_DIR variable will always point to the root binary directory of the current project. Compare this with CMAKE_BINARY_DIR, which always points to the binary directory of the top-level project. Also note that certain generators may generate additional files for projects. For example, the Visual Studio generators will create a .sln solution file for each subproject.
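A minimal sketch of such nesting (all names hypothetical):

# Top-level CMakeLists.txt
cmake_minimum_required(VERSION 3.10)
project(BigSuite)
add_subdirectory(engine)

# engine/CMakeLists.txt -- a self-contained sub-project
project(Engine)
add_library(engine STATIC engine.cpp)
# Inside this file, PROJECT_BINARY_DIR is <build>/engine,
# while CMAKE_BINARY_DIR is still the top-level <build> directory.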
Use sub-projects if your codebase is very complex and you need users to be able to build certain components in isolation. This gives you a very powerful mechanism for structuring the build system. Due to the increased coding and maintenance overhead required to make the several sub-projects truly self-contained, I would advise going down that road only if you have a real use case for it. Splitting the codebase into different targets should always be the preferred mechanism for structuring the build, while sub-projects should be reserved for those rare cases where you really need to make a subset of targets self-contained.

MSBuild - Project Dependencies Build Order Anomaly

We have an MSBuild .proj file that is used to build and run lots of projects that have file references to one another. Because these are file references, we maintain the list of project dependencies on the solution so that MSBuild can identify the order in which to build the projects.
We had an issue recently where our US development team added a new reference to one project but didn't update the dependencies list. It built fine for them locally and on the server; however, for the UK team the build consistently failed for all developers.
I'm trying to understand why this was the case. The only thing I can think of is that our culture causes the build order to be slightly different when two projects are essentially ranked at the same level, i.e. if projects A and B are perceived to have the same dependencies, the order in which the two are built is determined arbitrarily, and this might differ between the UK and US cultures.
That said, my machine (it broke for me) has the region format set to English (US) and the location set to UK.
Does this sound feasible? or is there a better explanation for this?
Changing the culture will not fix the underlying issue: you're making a file reference to a component's output rather than a project reference to the project that builds the assembly.
Solution files do map dependencies and help MSBuild determine build order. When MSBuild processes a solution, it first converts it in memory to MSBuild XML format, then determines the project order from the listed dependencies, then determines whether a project needs to be rebuilt by comparing the last-modified times of the input files to those of the output files.
Use project references within your projects instead: open the References folder in each project, remove the references to output assemblies, and reference the projects themselves. This modifies the project file, replacing Reference item groups with ProjectReference item groups. That allows MSBuild to make the build-order determination at the project level and resolves the issue of having to configure the project build order manually.
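Concretely, the change in the .csproj looks roughly like this (project names, paths, and the GUID are hypothetical):

<!-- Before: file reference to the built assembly -->
<Reference Include="P1">
  <HintPath>..\P1\bin\Release\P1.dll</HintPath>
</Reference>

<!-- After: project reference, from which MSBuild derives the build order -->
<ProjectReference Include="..\P1\P1.csproj">
  <Project>{11111111-1111-1111-1111-111111111111}</Project>
  <Name>P1</Name>
</ProjectReference>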

Does MSBuild know if a project needs to be recompiled?

First, I have a base assumption from watching Visual Studio compile things with its default .*proj files that, if you build the same solution twice in a row, it detects that nothing has changed and seems to fly through the solution build. Does this mean it knows that nothing was changed in a project and doesn't have to make a new DLL output?
If that's the case, I have a question. Say I have a solution with multiple class libraries, and an MSBuild task in each project that automatically increments the build's version by modifying AssemblyInfo.cs. Thing is (if my previous assumption is correct), it does this every time and triggers a new rebuild of each class library. Is there a target or property in MSBuild that can tell whether the project needs recompilation, so that I can skip my versioning step when it doesn't?
I ask because let's say I update project A, but not project B in a solution. If I run a build on the solution, I want it to update the version on project A, but since project B hasn't changed, I want to leave it alone.
Found something: http://msdn.microsoft.com/en-us/library/ms171483.aspx
MSBuild can compare the timestamps of the input files with the timestamps of the output files and determine whether to skip, build, or partially rebuild a target. In the following example, if any file in the @(CSFile) item collection is newer than the hello.exe file, MSBuild will run the target; otherwise it will be skipped:
<Target Name="Build" Inputs="@(CSFile)" Outputs="hello.exe">
  <Csc Sources="@(CSFile)" OutputAssembly="hello.exe" />
</Target>
...that worked. But then got me thinking, what if someone pulls the code down from source control without the assemblies (which is how we do it)? Since it has no output to compare against, it'll do a compile and increment the version anyway. I think the complexities might lead me to abandon this approach.
It doesn't really matter if you increment on a developer's box; what's important is that your daily/CI build is incremented only when needed. So, what I've done in the past is have a small XML file contain the next build number, and have an MSBuild task take this XML file and create a file called Version.cs (containing the versioning attributes you'd usually find in AssemblyInfo.cs).
Version.cs is never checked into your source control; it's generated by the build.
Developers will sync the current XML file, build their binaries, and get the current version number. The continuous integration build may do the same thing. But a daily/official build will check out the XML file, increment the version information, and then check it in. From that moment on, the version number has officially changed.
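A sketch of such a target (the version.xml layout, property names, and version prefix are assumptions):

<!-- Regenerates Version.cs only when version.xml is newer than the last output -->
<Target Name="GenerateVersion" BeforeTargets="CoreCompile"
        Inputs="version.xml" Outputs="$(IntermediateOutputPath)Version.cs">
  <XmlPeek XmlInputPath="version.xml" Query="/version/build/text()">
    <Output TaskParameter="Result" PropertyName="BuildNumber" />
  </XmlPeek>
  <WriteLinesToFile File="$(IntermediateOutputPath)Version.cs" Overwrite="true"
      Lines="[assembly: System.Reflection.AssemblyFileVersion(&quot;1.0.0.$(BuildNumber)&quot;)]" />
  <ItemGroup>
    <Compile Include="$(IntermediateOutputPath)Version.cs" />
  </ItemGroup>
</Target>

The Inputs/Outputs pair gives the target the same timestamp-based incremental behavior described above, so an unchanged version.xml doesn't force a rebuild.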
There are variations on this theme, but the general idea works.