Is there a way to know the SCM URL used in a Jenkins job programmatically? - maven-2

I'm currently creating a custom Maven enforcer rule, and for its purpose I need to know the URL of the SCM (Subversion, Git...) used by the given job, i.e. where the sources were just checked out by the Jenkins job.
Is there a way to get that information?
I had a look at the parameters Jenkins sets in the system environment, but none of them gives me the full SCM URL used by the job. The API (i.e. http://jenkins-server/job/my-job/api/xml) does not contain this information either.
I know that there is a <scm><connection> tag in the pom.xml, but this information may not be reliable.
Thanks.

These values are set by the SCM implementation. To find out which environment variables actually exist in your build, you can use the EnvInject plugin.
For Subversion, you can use the environment variable SVN_URL (if there is only one URL) or SVN_URL_n, n being a number (if there are multiple URLs), as described in the Subversion plugin's documentation.
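For instance, inside a custom enforcer rule you could read that variable straight from the process environment. This is a minimal sketch, assuming the build is launched by Jenkins (which exports SVN_URL to child processes); the class name is hypothetical and it shows only the lookup, not a complete EnforcerRule implementation:

```java
import org.apache.maven.enforcer.rule.api.EnforcerRuleException;

// Hedged sketch: Jenkins' Subversion support exports SVN_URL into the
// build environment, and the Maven process it launches inherits it.
public class ScmUrlLookup {
    public static String requireScmUrl() throws EnforcerRuleException {
        String scmUrl = System.getenv("SVN_URL");
        if (scmUrl == null) {
            // Not running under Jenkins, or a multi-URL checkout (SVN_URL_1, ...).
            throw new EnforcerRuleException("SVN_URL is not set in the environment");
        }
        return scmUrl;
    }
}
```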
You could also use the System Groovy build step from the Groovy plugin to access the Jenkins object tree directly and, for example, write this information to a file. Note that accessing internals is prone to breaking on updates, and prone to breaking Jenkins if you're not careful.
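For illustration, such a System Groovy script might look like the sketch below. The internal API details vary across Jenkins and Subversion plugin versions, so treat it as an assumption-laden example rather than a stable recipe:

```groovy
// Runs via the "Execute system Groovy script" build step, inside the
// Jenkins JVM, so it can reach the job's SCM configuration directly.
def build = Thread.currentThread().executable   // the current build
def scm = build.project.scm
if (scm instanceof hudson.scm.SubversionSCM) {
    scm.locations.each { loc ->
        println "SCM URL: ${loc.remote}"        // one line per SVN location
    }
} else {
    println "Unhandled SCM type: ${scm.class.name}"
}
```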

Related

How do you see, set, and access variables in IntelliJ so that they can be passed from a plug-in to system properties?

I have the GitHub plug-in in IntelliJ. It knows the branch that is being worked on.
How would you get that branch name and add it to the system properties used when launching a server in the IDE?
This would be the equivalent of something like:
-Dthevariableforbranchnumber=${some.branch.number}
Support for variables in the parameters of the run/debug configurations was added recently, but there is no variable for the branch number.
There is also a related request to add branch to the live templates.
You are welcome to submit a new request at https://youtrack.jetbrains.com/issues/IDEA to provide a variable with the VCS branch for the run configuration.
It's still not clear how this would work if the project has multiple VCS roots with different branches.
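Until such a variable exists, one workaround is to resolve the branch outside the IDE and pass it along when launching the server. A hedged sketch (the property name is just the example from the question, and server.jar is a placeholder):

```sh
# Ask git for the current branch and hand it to the JVM as a system property.
BRANCH=$(git rev-parse --abbrev-ref HEAD)
java -Dthevariableforbranchnumber="$BRANCH" -jar server.jar
```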

How to run the same SpecFlow tests against various environments?

I'd like to write one suite of SpecFlow tests that test my web application (using Selenium) in various environments.
So I have a test written like this:
Given that I am on the login page
which in turn leads to a step definition that boils down to
driver.Navigate().GoToUrl("http://www.myapp.com/login.aspx");
However, I want my test to be able to run against "http://localhost" or "http://test.myapp.com" as well, without having to recompile. The best idea I've come up with is to place these sorts of settings in the App.config file, but that has its problems as well.
Does anyone have suggestions on how best to achieve this? Basically I want to pass in environment settings for my tests at runtime.
You can do this by changing the config file through the build process using transforms, and there are tools that let you run the transform outside the build process from the command line (so you don't have to change it manually, and you avoid a build). This has already been discussed on SO:
Web.Config transforms outside of Microsoft MSBuild?
For example using PowerShell.
I would still question whether you might be better off starting a local instance of the service you wish to test, rather than connecting to something outside the tests' explicit control. You could instead use a method similar to self-hosting a Web API or hosting a WCF service. That way you can inject mocks, modify and reset the database, or perform any other action you want.
If that still isn't what you need, an alternative to config files would be to set up environment variables that can be read at run time; see How to pass Command line argument to specflow test scenario
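As a concrete illustration of that last option, a step definition can resolve the base URL at run time and fall back to a default. This is a hedged sketch: TEST_BASE_URL is a made-up variable name, and it assumes the driver is supplied through SpecFlow context injection:

```csharp
using System;
using OpenQA.Selenium;
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    private readonly IWebDriver driver;

    // Assumes IWebDriver is registered for SpecFlow context injection.
    public LoginSteps(IWebDriver driver)
    {
        this.driver = driver;
    }

    [Given(@"that I am on the login page")]
    public void GivenThatIAmOnTheLoginPage()
    {
        // Read the target environment at run time; no recompile needed.
        var baseUrl = Environment.GetEnvironmentVariable("TEST_BASE_URL")
                      ?? "http://localhost";
        driver.Navigate().GoToUrl(baseUrl + "/login.aspx");
    }
}
```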

Making StyleCop & Jenkins do my bidding

We're trying to set up Jenkins with the latest version of StyleCop.
Our existing Jenkins setup invokes StyleCop via StyleCopCmd from NAnt, but StyleCopCmd seems increasingly out-of-date and unmaintained, and I'd rather cut it out. So the best-supported solution seems to be to invoke StyleCop from MSBuild.
Our solution consists of multiple projects, but the Jenkins Violations plugin expects a single stylecop.violations.xml file, so the widely documented solution of importing StyleCop.targets and invoking it from each '.csproj' file seems like it won't work (because this would produce multiple violations files, which the Jenkins plugin can't cope with).
SO:
Is there some way of merging multiple StyleCop violations files so that they are treated as one by the Jenkins Violations plugin, OR
Is there some way, in MSBuild, of peeking into multiple '.csproj' files, extracting the '.cs' files, and running them all through StyleCop in one go? Alternatively:
Given we're using Jenkins and multiple project files, is there another way of reporting violations for all of the projects in the solution?
Any help gratefully received.
See this: http://ferritedog.wordpress.com/2011/05/27/1-hour-guide-to-continuous-integration-setup-jenkins-meets-net/
Basically, use the XML FileName Pattern **/*/StyleCopViolations.xml.
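For completeness, the per-project wiring that produces those violation files is the usual StyleCop.targets import in each '.csproj'. A hedged example; the path depends on where your StyleCop version installed its MSBuild targets:

```xml
<!-- In each .csproj, after the Microsoft.CSharp.targets import.
     Adjust the path for your StyleCop install (4.7 shown as an assumption). -->
<Import Project="$(MSBuildExtensionsPath)\StyleCop\v4.7\StyleCop.targets" />
```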

Maven best practice for generating artifacts for multiple environments [prod, test, dev] with CI/Hudson support?

I have a project that needs to be deployed into multiple environments (prod, test, dev). The differences mainly consist of configuration properties/files.
My idea was to use profiles and overlays to copy/configure the specialized output. But I'm stuck on whether I should generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or create multiple projects, one for every environment.
Should I use the maven-assembly-plugin to generate multiple artifacts for every environment?
Anyway, I'll need to generate all of them at once, so it seems that profiles do not fit... still puzzled :(
Any hints/examples/links will be more than welcomed.
As a side issue, I'm also wondering how to achieve this in CI (Hudson/Bamboo): generating these artifacts and deploying them, for all the environments, to their proper servers (e.g. using the Hudson SCP plugin)?
I prefer to package configuration files separately from the application. This allows you to run the EXACT same application and supply the configuration at run time. It also allows you to generate configuration files after the fact for an environment you didn't know you would need at build time. e.g. CERT
I use the "assembly" tool to zip up each domain's config files into named files.
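Presumably that refers to the maven-assembly-plugin; sketched below with one execution per environment's config zip (the descriptor paths and ids are made up):

```xml
<!-- Hedged sketch: one assembly execution per environment configuration. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <executions>
    <execution>
      <id>config-prod</id>
      <phase>package</phase>
      <goals><goal>single</goal></goals>
      <configuration>
        <descriptors>
          <descriptor>src/assemble/config-prod.xml</descriptor>
        </descriptors>
      </configuration>
    </execution>
    <!-- ...repeat for dev, test, cert, etc. -->
  </executions>
</plugin>
```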
I would use the version element (like 1.0-SNAPSHOT, 1.0-UAT, 1.0-PROD), and thus tags/branches at the VCS level, in combination with profiles (for environment-specific things like machine names, user names, passwords, etc.) to build the various artifacts.
We implemented an m2 plugin to build the final .properties using the following approach:
The common, environment-unaware settings are read from common.properties.
The specific, environment-aware settings are read from dev.properties, test.properties or production.properties, thus overriding default values if necessary.
The final .properties file is written to disk from the Properties instance after reading the files in the given order.
That .properties file is what gets bundled, depending on the target environment.
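A minimal sketch of that merge, with the file names taken from the description above (the output name and the way the environment is selected are assumptions):

```java
import java.io.*;
import java.util.Properties;

public class PropertiesMerger {
    // Load order matters: keys loaded later override earlier ones.
    public static void merge(String env) throws IOException {
        Properties props = new Properties();
        try (InputStream in = new FileInputStream("common.properties")) {
            props.load(in);  // environment-unaware defaults
        }
        try (InputStream in = new FileInputStream(env + ".properties")) {
            props.load(in);  // environment-specific overrides
        }
        try (OutputStream out = new FileOutputStream("application.properties")) {
            props.store(out, "merged for " + env);  // the file that gets bundled
        }
    }
}
```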
We use profiles to achieve that, but we have only the default profile, which we call the "development" profile and which includes the configuration files, and a "release" profile, where we don't include the configuration files (so they can be properly configured when the application is installed).
I would use profiles to do it, and I would append the profile to the artifact name if you need to deploy it. I think it is somewhat similar to what Pascal suggested, only that you would be using profiles and not versions.
PS: Another reason why we have only dev/release profiles is that whenever we send something to UAT or PROD it has been released, so if there is a bug we can track down the state of the code when the application was released - it is easier to tag it in SVN than to try to reconstruct its state from the commit history.
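Sketched in POM terms (the directory names are hypothetical), that profile setup looks roughly like this:

```xml
<!-- Hedged sketch: the default "development" profile bundles local config;
     the "release" profile omits it so it can be supplied at install time. -->
<profiles>
  <profile>
    <id>development</id>
    <activation>
      <activeByDefault>true</activeByDefault>
    </activation>
    <build>
      <resources>
        <resource>
          <directory>src/main/config</directory>
        </resource>
      </resources>
    </build>
  </profile>
  <profile>
    <id>release</id>
    <!-- no config resources here -->
  </profile>
</profiles>
```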
I had this exact scenario last summer.
I ended up using profiles for each higher environment, with classifiers. The default profile was a "do no harm" development build. I had DEV, INT, UAT, QA, and PROD profiles.
I ended up defining multiple jobs within Hudson to generate the region specific artifacts.
The one thing I would have done differently was to architect the projects a bit differently, so that the region-specific build was outside the modularized main project. That way it would simply pull in the latest artifacts for each specific build rather than rebuilding the entire project for each region.
In fact, when I set up the jobs, the QA and PROD jobs were always set up to build off of a tag. Clearly this is something you would tailor to your specific workplace rules on deployment.
Try using https://github.com/khmarbaise/multienv-maven-plugin to create one main WAR and one configuration JAR for each environment.

Maven repository configurations

I've asked a similar question in which part of this was addressed, but I'd like to expand in more detail.
When configuring Maven to look at internal repositories, is it best to put that information in the project POM or in a user's settings.xml? An explanation of why would be really helpful here.
thanks,
Jeff
You should always try to make the Maven project compile from a clean checkout from source control in your local environment, without a settings.xml. In my opinion this means that you place any overrides to sensible default values in the user's settings.xml file, but the POM should contain sensible values that will work for everyone.
I encourage you to put the repository definition in the POM; this way any developer just grabs a copy of the code and runs Maven to get it compiled, without having to change things in his settings file.
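For example (the id and URL are placeholders), that is just a repositories block in the project POM:

```xml
<!-- Hedged example: an internal repository declared in the POM, so a
     fresh checkout builds without any per-user setup. -->
<repositories>
  <repository>
    <id>internal-releases</id>
    <url>http://repo.example.com/releases</url>
  </repository>
</repositories>
```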
I find the settings.xml file useful just for tweaking Maven's behaviour in special situations, for example when one repository is not accessible because of a firewall and you need to use a mirror. But that's my personal opinion. The Maven documentation gives you more freedom:
The settings element in the settings.xml file contains elements used to define values which configure Maven execution in various ways, like the pom.xml, but should not be bundled to any specific project, or distributed to an audience. These include values such as the local repository location, alternate remote repository servers, and authentication information.
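The firewall case mentioned above is the classic per-user settings.xml use. A sketch, with a placeholder id and URL:

```xml
<!-- ~/.m2/settings.xml sketch: route all repository traffic through an
     internal mirror (hypothetical id/url). -->
<settings>
  <mirrors>
    <mirror>
      <id>internal-mirror</id>
      <mirrorOf>*</mirrorOf>
      <url>http://repo.example.com/maven2</url>
    </mirror>
  </mirrors>
</settings>
```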
If you have a local repository which is used in every single project, you may add it to the settings.xml; just be sure that configuration is well documented. In my current project it's not, and new developers struggle at the beginning when they try to compile something.
We use the user's settings.xml and include info in the README about what possible other repos may be needed.
In theory a given group-artifact-version is the same no matter which repo it comes from. It works pretty well for us. If you find yourself with two different assets that have the same group-artifact-version identifier, then that indicates you're doing something really bad.