Say I have a standardised Maven build that I can describe in GitLab CI.
I would like to reuse this Maven build in 100 repositories.
What is the best way to achieve this with as little code duplication as possible?
For reusing job definitions, GitLab CI templates fill this use case well. There are several ways to include a template in your project configuration using the include: keyword. It might look something like this:
# your template file
# mytemplate.yml
maven-build:
  variables:
    MY_MAVEN_OPS: "" # variables may be useful for customizing behavior easily
  stage: build
  script:
    - mvn ... # you build this
Then in your 100 other projects, you may have something minimal in the .gitlab-ci.yml to reuse the above configuration like:
include:
  - project: jfabianmeier/templates
    file: mytemplate.yml
    ref: main # or a tag, git SHA, or whatever
You can also configure a project to use a remote CI configuration, so committing a configuration to every project can be avoided entirely. This setting can also be configured through the GitLab API. This would certainly be the solution with the absolute minimum of repeated code, albeit somewhat inflexible.
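For reference, setting a remote CI configuration through the projects API might look roughly like this (a hedged sketch: the host, token, and project ID are placeholders; the ci_config_path attribute takes a value of the form file@group/project):

# point project 42 at mytemplate.yml in jfabianmeier/templates
curl --request PUT \
  --header "PRIVATE-TOKEN: <your_access_token>" \
  --data-urlencode "ci_config_path=mytemplate.yml@jfabianmeier/templates" \
  "https://gitlab.example.com/api/v4/projects/42"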
Related
I am trying to learn how YAML specs work in Bamboo. So far I have managed to deploy a plan by following the official documentation.
The documentation explains that you need to create a Bitbucket repository, create bamboo.yml, set up a new project in Bamboo, enable a Bamboo specs repository, and finally you get your plan created based on the YAML specs.
My question is: can I create a plan.yml and deploy it from another Bamboo plan?
For example, with Java specs it is enough to check out a repo with several *.java spec files and use Maven and a pom file to deploy all the plans.
Can I do something similar with YAML specs? That is, have a folder in some SCM with several *.yml files and deploy them simultaneously, so that I end up with a lot of plans in Bamboo based on those yml files.
Yes and no. YAML specs can't be sent to the server the way Java specs can; they need to be committed to the repository first.
You also need to have your different projects created before committing the YAML specs, and the specs repository must either be granted access to each individual project or have the flag enabled on the linked repository (in the Specs tab) that allows access to all projects.
If that is not an issue, then yes, there is no problem defining multiple plans in your Bamboo specs YAML file, even across multiple projects, as long as they are split into separate YAML documents (separated by "---"), as sketched below.
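For illustration, a bamboo.yml carrying two plans in two projects might be laid out like this (a rough sketch: the project and plan keys are made up and the stage/job details are elided; check the Bamboo YAML specs reference for the exact schema of your version):

---
version: 2
plan:
  project-key: PROJA
  key: PLANONE
  name: First plan
# ... stages, jobs and tasks of the first plan ...
---
version: 2
plan:
  project-key: PROJB
  key: PLANTWO
  name: Second plan
# ... stages, jobs and tasks of the second plan ...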
To sum up the components and environment:
multi-project setup; typically each Gradle project lives solely in a separate Git repository
you don't want to use submodules
Gradle init scripts live in a separate config / super repository
using the Gradle wrapper
for the GUI guy: IntelliJ IDEA with Gradle integration -> help
allowed to use gradle idea -> guide
So,
Q: How can these components be married elegantly? How can I define an init script to be used by the wrapper of a single repository without affecting other repositories?
I know:
init scripts typically live in a "GRADLE_HOME" directory
init scripts can be passed per invocation via -I
(yes, I read the documentation 😅 )
Problems found:
IntelliJ doesn't allow defining the -I option in the UI
everyone needs to check out and update a separate repository if you want to share scripts between projects
neither settings.gradle nor gradle.properties seems to support such an option either
Constraints:
(while these are possible answers, they are neither elegant nor foolproof)
the desired solution should be applicable to SINGLE projects, and should not be applied globally to all projects on the same computer
Hidden Questions:
can I include global Gradle settings from a URL so no one needs a clone of the meta-repo?
does a URL include do the same as an init script? Or: what can you do with an init script that you can't with an include?
You can do the following:
Create a custom Gradle distribution with the common settings defined in an init script
Configure your projects to use that distribution through the distributionUrl key in gradle/wrapper/gradle-wrapper.properties (see the sketch below)
Use a regular gradle build from the command line / the usual import into IntelliJ - it just works
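The wrapper properties file might then look something like this (a sketch: the host and the path to your repackaged distribution zip are placeholders):

# gradle/wrapper/gradle-wrapper.properties
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://repo.example.com/gradle/my-company-gradle-8.5-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists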
By the way, there is a Gradle plugin for simplifying the construction of custom Gradle distributions
You can use the buildSrc customization - depending on what you need -
where buildSrc/build.gradle takes effect before the configuration phase of your project.
What you should know: the scope is different, i.e. allprojects in buildSrc/build.gradle is scoped to the projects beneath buildSrc, not your normal projects.
More generally speaking: buildSrc is like what you normally do in buildscript blocks or in task declarations in script plugins, and you can write clean plugin code without publishing it as a plugin.
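For example, a class under buildSrc can be applied as a plugin in your build scripts without ever being published (a minimal sketch; the class name and the conventions it applies are made up):

// buildSrc/src/main/groovy/MyConventionPlugin.groovy
import org.gradle.api.Plugin
import org.gradle.api.Project

class MyConventionPlugin implements Plugin<Project> {
    void apply(Project project) {
        // shared conventions every consuming project gets
        project.group = 'com.example'
        project.repositories.mavenCentral()
    }
}

// then, in any build.gradle of the same repository:
// apply plugin: MyConventionPlugin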
⚠️ Limitations:
you can't take care of plugin resolution - for that you have to go into your project's settings.gradle
you can't change dependency management for your projects - you still have to do this in your projects' own build scripts
for both, see How can the gradle plugin repository be changed?
you still have to apply plugins (even those homed in buildSrc) in your projects (which is a good thing if you ask me, because it makes it more visible / clear what happens)
you can't share this with a second repository - not without using git submodules, etc.
I have a lot of jobs on Hudson, most of which are really small and consist of just a few modules. But one is big and consists of several modules.
Whenever I commit to our Subversion repository for any of the modules in that big job, Hudson builds the entire job instead of just the module that has changed.
It doesn't matter whether I use SCM polling or a Subversion hook; the result is the same.
It seems to me that it would be better if the modules were built instead of the jobs, since the modules in other jobs have dependencies on the modules and not on the jobs.
Can this be configured, or do I have to create several jobs instead of the big one? And if so, can I configure the big job to never build when any of its modules is triggered, but still build when its own pom.xml changes?
Thanks.
Hudson has an "Incremental Build" option in the Maven area of the job configuration.
It's hidden in the "Advanced" area.
You could make use of the reactor plugin. For example:
mvn reactor:make-scm-changes
This will only build those modules that have been changed in the SCM. Follow the link for other examples.
Doesn't your compiler offer an incremental compile option? The Java 1.6 compiler usually looks at both class and source files and uses their timestamps to decide whether to use the source or the class file. Just leave out the clean goal when building your code.
Another option would be to first run a batch/shell script that determines which files changed and deletes the corresponding class files, so that the compiler incrementally rebuilds the class files that are missing.
I have a situation that I'm sure must be fairly common. I have some Maven-built applications that deploy to different types of application server - like Tomcat, JBoss, etc.
The build process 'tunes' the deployable artifact to the specific target type of application server (for example, different included dependencies, context roots, other config). This tuning is controlled with build profiles (-Ptomcat, -Pjboss, etc.).
So, for a given version of my application, I need to run builds that produce different deployables. I run mvn -Ptomcat clean package for example and I get an artifact in my /target directory that is the tomcat-tuned version.
The best approach I've been able to come up with so far is to specify finalNames for the artifacts that include the profile information, but with that approach I'm not sure how to configure Maven to copy the final artifact off to some specific location, so that the next build for a different server type doesn't overwrite it.
Is this a good approach? If so, how can I achieve that final copy?
Or is there a better way?
You'll need to use the Maven Assembly Plugin.
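Wired into your profiles, that could look roughly like this (a sketch rather than a drop-in configuration: the descriptor path src/assembly/server.xml and the server.classifier property are assumptions you would adapt):

<profiles>
  <profile>
    <id>tomcat</id>
    <properties>
      <server.classifier>tomcat</server.classifier>
    </properties>
  </profile>
  <profile>
    <id>jboss</id>
    <properties>
      <server.classifier>jboss</server.classifier>
    </properties>
  </profile>
</profiles>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptors>
          <descriptor>src/assembly/server.xml</descriptor>
        </descriptors>
        <!-- yields e.g. my-app-1.0-tomcat.zip, so builds don't overwrite each other -->
        <finalName>${project.artifactId}-${project.version}-${server.classifier}</finalName>
        <appendAssemblyId>false</appendAssemblyId>
      </configuration>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>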
I have a project that needs to be deployed into multiple environments (prod, test, dev). The main differences consist mainly of configuration properties/files.
My idea was to use profiles and overlays to copy/configure the specialized output. But I'm stuck on whether I have to generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or whether I should create multiple projects, one project for every environment.
Should I use the maven-assembly-plugin to generate multiple artifacts, one for every environment?
Anyway, I'll need to generate all of them at once, so it seems that profiles do not fit... still puzzled :(
Any hints/examples/links will be more than welcome.
As a side issue, I'm also wondering how to achieve this in CI (Hudson/Bamboo): generating these artifacts for all the environments and deploying them to their proper servers (e.g. using the Hudson SCP plugin)?
I prefer to package configuration files separately from the application. This allows you to run the EXACT same application and supply the configuration at run time. It also allows you to generate configuration files after the fact for an environment you didn't know you would need at build time, e.g. CERT.
I use the "assembly" tool to zip up each domain's config files into named files.
I would use the version element (like 1.0-SNAPSHOT, 1.0-UAT, 1.0-PROD), and thus tags/branches at the VCS level, in combination with profiles (for environment-specific things like machine names, user names, passwords, etc.) to build the various artifacts.
We implemented an m2 plugin to build the final .properties files using the following approach:
The common, environment-unaware settings are read from common.properties.
The specific, environment-aware settings are read from dev.properties, test.properties or production.properties, overriding default values where necessary.
The final .properties file is written to disk from the Properties instance after reading the files in the given order.
That .properties file is what gets bundled, depending on the target environment.
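In code, the merge boils down to roughly this (a standalone sketch rather than our actual plugin code; the input file names follow the list above, while the output name application.properties and the argument handling are made up):

// MergeProperties.java
import java.io.*;
import java.util.Properties;

public class MergeProperties {
    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        // 1. environment-unaware defaults first
        try (Reader r = new FileReader("common.properties")) {
            props.load(r);
        }
        // 2. environment-specific values override defaults, e.g. args[0] = "dev"
        try (Reader r = new FileReader(args[0] + ".properties")) {
            props.load(r);
        }
        // 3. write the merged result that gets bundled
        try (Writer w = new FileWriter("application.properties")) {
            props.store(w, "merged for environment " + args[0]);
        }
    }
}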
We use profiles to achieve that, but only two: the default profile, which we call the "development" profile and which includes the configuration files, and a "release" profile, which does not include the configuration files (so they can be properly configured when the application is installed).
I would use profiles to do it, and I would append the profile to the artifact name if you need to deploy it. I think it is somewhat similar to what Pascal suggested, only that you would be using profiles and not versions.
PS: another reason why we have only dev/release profiles is that whatever we send to UAT or PROD has been released, so if there is a bug we can track down the state of the code at release time; it is easier to tag it in SVN than to reconstruct its state from the commit history.
I had this exact scenario last summer.
I ended up using profiles for each higher environment, with classifiers. The default profile was a "do no harm" development build. I had DEV, INT, UAT, QA, and PROD profiles.
I ended up defining multiple jobs within Hudson to generate the region-specific artifacts.
The one thing I would have done differently is to architect the projects a bit differently, so that the region-specific build sat outside of the modularized main project. That way it would simply pull in the latest artifacts for each specific build rather than rebuilding the entire project for each region.
In fact, when I set up the jobs, the QA and PROD jobs were always set up to build off a tag. Clearly this is something that you would tailor to your specific workplace rules on deployment.
Try using https://github.com/khmarbaise/multienv-maven-plugin to create one main WAR and one configuration JAR for each environment.