We're trying to migrate from our current Ant build to Maven. In the current project, we have different properties files for each environment, say qa.properties, prod.properties and dev.properties.
The property values in these files are used to replace the properties referenced in config files (present in src\main\resources\config). The current Ant build process replaces every property referenced in the config files with its corresponding value for the current build environment.
I'm somewhat aware of the profiles concept in Maven, but I'm not able to figure out how to achieve this using Maven.
Any help would be appreciated.
Thanks,
Prabhjot
There are several ways to implement this, but they are all variations on the same features: combine profiles with filtering. A Maven2 multi-environment filter setup shows one way to implement such a setup (a little variation would be to move the filter declaration inside each profile).
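For illustration, here is a minimal sketch of that setup; the filter directory, the file names, and the env property are assumptions you'd adapt to your layout:

<profiles>
  <profile>
    <id>dev</id>
    <properties>
      <env>dev</env>
    </properties>
  </profile>
  <profile>
    <id>qa</id>
    <properties>
      <env>qa</env>
    </properties>
  </profile>
  <profile>
    <id>prod</id>
    <properties>
      <env>prod</env>
    </properties>
  </profile>
</profiles>
<build>
  <!-- pick the filter file matching the active profile -->
  <filters>
    <filter>src/main/filters/${env}.properties</filter>
  </filters>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <!-- replace ${...} references in the config files -->
      <filtering>true</filtering>
    </resource>
  </resources>
</build>

Running mvn package -Pqa would then filter the config files under src/main/resources with the values from qa.properties.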
See also
9.3. Resource Filtering
My goal is to create build definitions within Visual Studio Team Services for both test and production environments. I need to update two variables in my code which determine which database and which blob storage the environment uses. Until now, I've juggled these values in Resource variables, pulling them in code from My.Resources.DB for a library and from Microsoft.Azure.CloudConfigurationManager.GetSetting("DatabaseConnectionString") for an Azure worker role. However, changing four variables every time I do a release is getting tiring.
I see a lot of posts that get close to what I want, but they're geared towards C#. For reasons beyond my influence, this project is written in VB.NET. It seems I have two options. First, I could call the MSBuild process with a couple of defined properties, passing them to the .metaproj build file, but I don't know how to get them used in VB code. That's preferable, but at this point I'm starting to doubt that it's possible.
I've been able to set some pre-processor constants, to be recognized in #If-#Else directives.
#If DEBUG = True Then
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.xxx")
#Else
BarStaticItemVersion.Caption = String.Format("Version: {0}", "1.18.0.133")
#End If
msbuild CalbertNG.sln.metaproj /t:Rebuild /p:DefineConstants="DEBUG=False"
This seems to work, though I need to Rebuild to change the value of that constant. Should I have to? Should Build be enough? Is this normal, or an indication that I don't have something set quite right?
I've seen other posts that talk about pre-processing the source files with some other builder, like Ant, but that seems like overkill. It feels like I'm close here. But I want to zoom out and ask, from a clean sheet of paper: if you're given two variables which need to change per environment, you're using VB.NET, and you want to incorporate those variable values in an automated VS Team Services build process upon code check-in, what's the best way to do it? (I want to define the variables in the VSTS panel, but that just passes them to my builder, so I need to know how to shape the call to MSBuild to make them useful.)
I can now control picking between two static strings via compiler directives, but I'd really like to reference the Build.BuildNumber that comes out of the MSBuild process to display to the user. And if I can do that, I can feed the variables for database and blob container via the same mechanism and skip the pre-processor.
You've already found the way you can pass data from the MsBuild arguments directly into the code. An alternative is to use the Condition attribute in your project files to make certain property groups optional; it even allows you to include specific files conditionally. You can control conditions by passing /p:ConditionalProperty=value on the MsBuild command line. This at least ensures people use a set of values that make sense together.
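For example, a sketch of such conditional property groups in the .vbproj (the Environment property name and its values are made up for illustration; VB.NET expects the name=value form in DefineConstants):

<PropertyGroup Condition="'$(Environment)' == 'Test'">
  <DefineConstants>DEBUG=False,TESTENV=True</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition="'$(Environment)' == 'Production'">
  <DefineConstants>DEBUG=False,TESTENV=False</DefineConstants>
</PropertyGroup>

Invoked as: msbuild CalbertNG.sln.metaproj /t:Rebuild /p:Environment=Test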
The problem is that when MsBuild is running in incremental mode, it is likely not to process your changes (as you've noticed). The reason is that the input files remain unchanged since the last build and are all older than the last generated output files.
To bypass this behavior, you'd normally create a separate solution configuration and override the output location for all projects to be unique for that configuration. Combined with setting the compiler constants for that specific configuration, you're ensured that incremental builds work as intended when building that Configuration/Platform combination.
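A rough sketch of what such a configuration-specific override could look like in the .vbproj (the TestEnv configuration name is hypothetical):

<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'TestEnv|AnyCPU'">
  <!-- unique output path so incremental builds don't reuse another configuration's outputs -->
  <OutputPath>bin\TestEnv\</OutputPath>
  <DefineConstants>TESTENV=True</DefineConstants>
</PropertyGroup>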
I do want to echo some of the comments from JerryM and Daniel Mann: some items are better stored elsewhere or updated before you actually start the compile phase.
Possible solutions:
Store your configuration data in config files and use Configuration Transformation to generate the right config file based on the selected solution configuration (see the transform sketch after this list). The process is explained on MSDN. To enable configuration transformation on all project types, you can use SlowCheetah.
Store your configuration data in the config files and use MsDeploy with a Parameters.xml file that matches the deploy package. It will perform the transformation at deploy time and will actually allow your solution to contain a standard config file you use at runtime, plus a publish profile which will post-process your configuration. You can use a SetParameters.xml file to override the variables at deploy time.
Create an installer project (such as through Wix) and merge the final configuration at install time (similar to the MsDeploy). You could even provide a UI which prompts for specific values (and can supply default values).
Use a CI server, like the new TFS/VSTS 2015 task-based build engine, and combine it with a task that can search and replace tokens, like the Replace Tokens task, the Tokenization Task, or Colin's ALM Corner Build and Release Tasks (plus a whole bunch that specifically deal with versioning). Handling these things in the CI server also allows you to do a quick build locally at all times and do these relatively expensive steps on the build server (patching source code breaks incremental builds in MsBuild, because there are always newer input files).
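To illustrate the first option, a minimal XDT transform sketch (the key matches the one from the question; the value and the file name are placeholders):

<!-- Web.Release.config (or App.TestEnv.config with SlowCheetah) -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- overwrite the value of the entry whose key matches -->
    <add key="DatabaseConnectionString" value="test-environment-connection-string"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>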
When talking specifically about versioning, there are a number of ways to set the AssemblyVersion and AssemblyFileVersion just before compile time; usually this involves overwriting the AssemblyInfo file before compilation. Your code could then use reflection to read the values at runtime. You can use the AssemblyInformationalVersion to specify something like you do in the example above, which contains .xxx or other text. It also ensures that the version displayed always reflects the information obtained when reading the file properties through Windows Explorer.
Is there any way to get the number of classes in a project or the complete workspace in Xcode?
A simple way to get a rough idea for a project is by checking the Compile Sources section of the project's Build Phases. The compile sources will list all source files (.m, .swift) and doesn't include any headers.
Assuming roughly one class per source file, this will give you a ballpark idea of how many classes there are in your project at a glance. Note that this doesn't include any embedded projects or frameworks.
You could use cloc which can also be installed via Homebrew: brew install cloc.
Cloc is an open-source command-line tool for counting lines of code, but it also provides the count of files grouped by file type. The simplest form is cloc <path-to-your-project-dir>, but the output can be configured via parameters.
A more complex solution (IMHO too complex) is using SonarQube with an Objective-C plugin. SonarQube has a nice interface and many functions, but just for counting classes it's way too much.
We are currently attempting to port a very (very) large project built with Ant to Maven (while also moving to SVN). All possibilities are being explored to remodel the project structure to best fit the Maven paradigm.
Now to be more specific, I have come across classifiers and would like to know how I could use them to my advantage, while refraining from "classifier anti-patterns".
Thanks
from: http://maven.apache.org/pom.html
classifier: You may occasionally find a fifth element on the coordinate, and that is the classifier. We will visit the classifier later, but for now it suffices to know that those kinds of projects are displayed as groupId:artifactId:packaging:classifier:version.
and
The classifier allows to distinguish artifacts that were built from the same POM but differ in their content. It is some optional and arbitrary string that - if present - is appended to the artifact name just after the version number. As a motivation for this element, consider for example a project that offers an artifact targeting JRE 1.5 but at the same time also an artifact that still supports JRE 1.4. The first artifact could be equipped with the classifier jdk15 and the second one with jdk14 such that clients can choose which one to use.

Another common use case for classifiers is the need to attach secondary artifacts to the project's main artifact. If you browse the Maven central repository, you will notice that the classifiers sources and javadoc are used to deploy the project source code and API docs along with the packaged class files.
I think the correct question would be: how do you use or abuse attached artifacts in Maven? Because basically that is why classifiers were introduced - to allow you to publish attached artifacts.
Well, Maven projects often implicitly use attached artifacts, e.g. by using the maven-javadoc-plugin or the maven-source-plugin. The maven-javadoc-plugin publishes an attached artifact containing the generated documentation under the javadoc classifier, and the maven-source-plugin publishes the sources under the sources classifier.
Now what about explicit usage of attached artifacts? I use attached artifacts to publish harness shell scripts (start.sh and co.). It's also a good idea to publish SQL scripts in an attached artifact with a classifier like sql.
How can you attach an arbitrary artifact with your own classifier? This can be done with the build-helper-maven-plugin.
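A hedged sketch of attaching a script with its own classifier (the paths and the classifier name are placeholders to adapt):

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>build-helper-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-scripts</id>
      <phase>package</phase>
      <goals>
        <goal>attach-artifact</goal>
      </goals>
      <configuration>
        <artifacts>
          <artifact>
            <file>src/main/scripts/start.sh</file>
            <type>sh</type>
            <classifier>scripts</classifier>
          </artifact>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>

This deploys the script alongside the main artifact as artifactId-version-scripts.sh.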
... I would like to know how I could use them to my advantage ...
Don't use them. They are optional and arbitrary.
If you are in the middle of porting a project over to maven, keep things simple and only do what is necessary (at first) to get everything working as you'd like. Then, after things are working like you want, you can explore more advanced features of maven to do cool stuff.
This answer is based on your question sounding like a "this feature sounds neat, how can I use it even though I don't have a need for it?" kind of question. If you have a need for this feature, please update your question with more information on how you were thinking of utilizing the classifier feature, and we will all be better informed to help you.
In contrast to Jesse Web's answer, it is good to learn about classifiers so that you can leverage them and avoid having to refactor code in addition to porting to Maven. We went through the same process a year or two ago. Previously we had everything in one code base, built together with Ant. In migrating to Maven, we also found the need to break out the various components into their own Maven projects. Some of these projects were really libraries but had some web resources (jsp, js, images, etc.). The end result was us creating an attached artifact (as mentioned by @Male) with the web resources, using the classifier "web-resources" and the type "war" (to use as an overlay). This was then, and still is after understanding Maven better, the best solution for porting an old, coupled project. We eventually want to separate out these web resources since they don't belong in this library, but at least that can be done as a separate task.
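For reference, consuming such an attached artifact is just an ordinary dependency with a classifier and type; the coordinates here are made up:

<dependency>
  <groupId>com.example</groupId>
  <artifactId>shared-library</artifactId>
  <version>1.0</version>
  <classifier>web-resources</classifier>
  <!-- war type so the maven-war-plugin treats it as an overlay -->
  <type>war</type>
</dependency>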
In general, you want to avoid having attached artifacts. This is typically a sign that a separate project should be created to build that artifact. I suggest looking at doing this anytime you are tempted to attach an artifact with a separate classifier.
I use classifiers to define supporting artefacts to the main artefact.
For example I have com.bar|foo-1.0.war and have some associated config called com.bar|foo-1.0-properties.zip
You can use classifiers when you have different versions of the same artifact that you want to deploy to your repository.
Here's a use case:
I use them in conjunction with properties in a POM. The POM has default values which can be overridden via the command line. Running without options uses the default property values. If I build a version of the artifact with different property values, I can deploy that to the repo with a classifier.
For example, the command:
mvn -DmyProperty=specialValue package install:install-file -Dfile=target/my-ear.ear -DpomFile=my-ear/pom.xml -Dclassifier=specialVersion
Builds a version of an ear artifact with special properties and deploys the artifact to my repo with a classifier "specialVersion".
So, my repo can have my-ear-1.0.0.ear and my-ear-1.0.0-specialVersion.ear.
Good morning,
Team Foundation Server 2010 question.
Do I need to create a Build Definition for every branch I have?
Is there a way to parametrize 'Workspace' in Team Build 2010 for different branches, so we could just queue a new build specifying the workspace paths?
I tried finding out how TFS retrieves the workspace paths from the workspace used in the build, but the XAML left me clueless, since there are parameters for everything except the mapped paths.
Thanks in advance!
Do I need to create a Build Definition for every branch I have?
No, but you may want to in order to have a cleaner implementation.
Is there a way to parametrize 'Workspace' in Team Build 2010 for different branches, so we could just queue a new build specifying the workspace paths?
Yes, but it isn't as straightforward (unless you are still using .proj files).
If you are using the upgrade template and still using proj files:
Building multiple branches, can I use parameters to identify the target branch?
If you are not using the upgrade template, this answer posted on SO will help point you in the right direction:
How to make build definition in TFS Build 2010 configurable w.r.t input variable values and “items to build”
I wonder what is the Maven way in my situation.
My application has a bunch of configuration files; let's call them profiles. Each profile configuration file is a *.properties file that contains keys/values and some comments on the semantics of these keys/values. The idea is to generate these *.properties files so that all of them have unified comments.
My plan is to create a template.properties file that contains something like
#Comments for key1/value1
key1=${key1.value}
#Comments for key2/value2
key2=${key2.value}
and a bunch of files like
#profile_data_1.properties
key1.value=profile_1_key_1_value
key2.value=profile_1_key_2_value
#profile_data_2.properties
key1.value=profile_2_key_1_value
key2.value=profile_2_key_2_value
Then I'd bind to the generate-resources phase to create a copy of template.properties per profile_data_* file, and filter that copy with the corresponding profile_data_*.properties as a filter.
The easiest way is probably to create an Ant build file and use the antrun plugin. But that is not the Maven way, is it?
Another option is to create a Maven plugin for that tiny task. Somehow, I don't like that idea (deploying a plugin is not something I really want).
Maven does offer filtering of resources that you can combine with Maven profiles (see for example this post), but I'm not sure it will help here. If I understand your needs correctly, you need to loop over a set of input files and change the name of the output file. And while the first part might be possible using several <execution> sections, I don't think the second part is doable with the resources plugin.
So if you want to do this in one build, the easiest way would indeed be to use the Maven AntRun plugin and to implement the loop and the processing logic with Ant tasks.
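A rough sketch of what that could look like, assuming the template and data files live in src/main/profiles (the paths are assumptions, and since plain Ant has no loop construct without ant-contrib, the copies are simply spelled out per profile):

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <target>
          <!-- copy the template once per profile, replacing the placeholder
               tokens with the values from the matching profile_data file -->
          <copy file="src/main/profiles/template.properties"
                tofile="${project.build.directory}/profiles/profile_1.properties">
            <filterset begintoken="${" endtoken="}">
              <filtersfile file="src/main/profiles/profile_data_1.properties"/>
            </filterset>
          </copy>
          <copy file="src/main/profiles/template.properties"
                tofile="${project.build.directory}/profiles/profile_2.properties">
            <filterset begintoken="${" endtoken="}">
              <filtersfile file="src/main/profiles/profile_data_2.properties"/>
            </filterset>
          </copy>
        </target>
      </configuration>
    </execution>
  </executions>
</plugin>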
And unless you need to reuse this in several places, I wouldn't encapsulate this logic in a Maven plugin; it wouldn't give you much benefit if it's only done in a single project, in a unique location.
You can extend the way Maven does its filtering, as Maven retrieves its filtering strategy from the Plexus container via dependency injection. So you would have to register a new default strategy. This is heavy stuff and badly documented, but I think it can be done.
Use these URLs as starting point:
http://maven.apache.org/shared/maven-filtering/usage.html
and
http://maven.apache.org/plugins/maven-resources-plugin/
Sean