How can a config be stored in a release as described in the 12-factor app manifesto?

The fifth factor of the 12-factor app manifesto is "build, release, run". It says that
the release stage takes the build produced by the build stage and combines it with the deploy's current config. The resulting release contains both the build and the config...
The third factor "Config" explains that the configuration is stored in environment variables. It sounds like a contradiction to me.
If the config is stored in environment variables, how can it be contained in the release? A Dockerfile is the only possibility I can think of, but that would be specific to Docker.
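To make this concrete (the file names here are made up by me), the closest I can imagine is a release directory that pairs the immutable build with a captured set of environment variables, which the run stage then exports:

# release-42/
#   app.tar.gz    <- immutable output of the build stage
#   config.env    <- the deploy's current config, captured as KEY=value lines
tar -xzf ./release-42/app.tar.gz            # unpack the immutable build
set -a; . ./release-42/config.env; set +a   # export the captured config into the environment
exec ./app                                  # run stage: same build, config supplied via env vars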

Related

One Repository With Multiple Deployments, Environment Variables, and Secrets on Vercel?

I'm doing some early research for a project I plan to deploy to Vercel. I am wondering if the following is possible:
I want to have one GitHub repository. This repository will use environment variables for API tokens and basic settings.
I have three versions of the project that I want to create. Instead of creating three separate repositories, I'd rather have one repository and make the slight differences via environment variables. This will make updates, fixes, etc. much easier.
So, my question is: Is it possible to deploy one repository three times, each with different environment variables, using Vercel?
Yes, it is possible to deploy multiple environments from one repository. This can be done by importing your project into Vercel. For every commit you make on the Git repo, a completely new environment is created for it. See https://vercel.com/docs/v2/git-integrations
You may also opt to create a different Git branch for each environment, and Vercel will take care of creating a new environment for each of them. See https://vercel.com/docs/v2/git-integrations/vercel-for-github#a-deployment-for-each-push
With regards to environment variables, here's what the doc says:
The maximum number of Environment Variables per Environment per Project is 100. For example, you can not have more than 100 Production Environment Variables.
Moreover, the total size of Environment Variables applied to a Deployment (including all the Environment Variables Names and Values) is limited to 4kb. Deployments made with Environment Variables exceeding the 4kb limit will fail during the Build Step.
- https://vercel.com/docs/v2/platform/limits?query=environment%20va#environment-variables
Environment Variables: https://vercel.com/docs/v2/build-step#environment-variables
Yes, they give you Production, Preview, and Development environments. Each has its own environment variables that you can save via the UI, or you can download the .env via the CLI with vercel env pull.
https://vercel.com/docs/build-step#environment-variables
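As a quick illustration of that CLI flow (the variable name is mine, not from the question):

vercel env add MY_API_TOKEN production   # store a variable in the Production environment (the CLI prompts for the value)
vercel env pull .env.local               # download the Development variables into a local .env file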
Multiple Vercel projects can be created for the same GitHub repo.
In other words, there is no restriction that only a single Vercel project can be created for a single GitHub repo.
Then, different environment variables can be set for different Vercel projects.
Pushing a commit to the GitHub repo triggers build & deploy of multiple Vercel projects.
Reference: https://github.com/vercel/vercel/discussions/4879#discussioncomment-356114

Is the .gitlab-ci.yml available for jobs with GIT_STRATEGY=none in Gitlab CI?

The Gitlab documentation says the following about GIT_STRATEGY: none:
none also re-uses the project workspace, but skips all Git operations (including GitLab Runner's pre-clone script, if present). It is mostly useful for jobs that operate exclusively on artifacts (e.g., deploy). Git repository data may be present, but it is certain to be out of date, so you should only rely on files brought into the project workspace from cache or artifacts.
I'm still a bit confused about how this is supposed to work. If the source code is not guaranteed to exist, then there might be no source in the project workspace and thus the .gitlab-ci.yml file would also be missing. Without a build script the job must fail. If the source is missing only part of the time depending on external factors, the job will fail randomly, which is even worse than failing every time. However, if it fails every single time then what's the point of the feature?
Another possibility I see is that .gitlab-ci.yml might be injected at runtime, so that even without a fresh copy of the repository there would be a build script. If so, could I define further files from my repository to inject into the build process? What are the restrictions on these particular jobs?
Yes, the .gitlab-ci.yml file is not copied onto the system, just like all the other files. But that doesn't matter, because the job is not run from that file. The job is run as a script on your target (and the file is read even before that, since it defines which target the job will run on). It is not possible to copy only selected files without a Git clone, although you may want to copy the files from some other server.
A good example of when you want to run GIT_STRATEGY: none is something like a Slack chat notification as the last stage of a build, when you really don't want to clone gigabytes of repository data just to push a notification.
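A minimal sketch of such a final-stage job (the stage name and webhook variable are assumptions, not from the question):

stages: [build, deploy, notify]

notify:
  stage: notify
  variables:
    GIT_STRATEGY: none   # no clone/fetch at all; the job only runs the script below
  script:
    - 'curl -X POST --data "payload={\"text\": \"Deployment finished\"}" "$SLACK_WEBHOOK_URL"'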

TeamCity Snapshot Dependencies without triggering rebuild

TLDR: How can I arrange it so that a snapshot dependency does not trigger new builds?
For my test processes to run, they need to run on a "Test" environment. Creating such an environment is simple, but lengthy; it can take as much as 45 minutes to an hour to finish building a test environment. Further, the name of the environment, and other such variables, are not fixed until the environment has finished building.
In my TeamCity build definition, I could put "build environment if missing" as a build step. However, that means that the first test of the day will take 45 minutes to run.
Instead, we created a separate build, that is scheduled to run every morning, that builds the test environment for the day. Our test build then has a snapshot dependency to that build in order to use the parameters of that build to determine the environment information, and everything works as expected, except for one issue:
When a new test is run, it frequently seems to trigger a rebuild of the test environment creation.
We don't want this to ever happen; the test environment creation is 'done' for the day and should not need to run again until tomorrow. How can we achieve this?
1. You already have a time-based trigger, so the environment will be prepared every morning.
2. Create a snapshot dependency in your product's TeamCity configuration (not in the one that prepares the test environment) and tick "Do not run new build if there is a suitable one".
3. The configuration used to set up the test environment should not have any VCS root (or it should point to some quiet place in source control where commits will not happen). To physically set up your environment you should not need any source code mapping; you can consume everything you need through your own NuGet packages, for example.
Note: in this workflow every build of your real project will still enqueue a build of the configuration that sets up the test environment (so it is physically in the build queue), but when its turn comes up TeamCity compares the changes since the last build, finds no commits on the VCS root (it points to a quiet place in source control, per step 3), and the build is skipped in under a second.
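If you keep your settings in TeamCity's Kotlin DSL rather than the UI, the same setting looks roughly like this (a sketch; the build type names are made up and the DSL version may differ in your setup):

import jetbrains.buildServer.configs.kotlin.v2019_2.*

object PrepareTestEnvironment : BuildType({
    name = "Prepare test environment"   // the morning environment-setup build
})

object RunTests : BuildType({
    name = "Run tests"
    dependencies {
        snapshot(PrepareTestEnvironment) {
            // DSL equivalent of "Do not run new build if there is a suitable one"
            reuseBuilds = ReuseBuilds.SUCCESSFUL
            onDependencyFailure = FailureAction.FAIL_TO_START
        }
    }
})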

Deploy multiple configurations from command line without changing project files

Please don't be too harsh, because I still don't entirely grasp this, but msbuild/msdeploy has been giving me some headaches lately.
Hopefully someone can provide a textual aspirin of some kind? So here is what I want to do:
I have a web application project, that has multiple configurations, thus multiple web.config-transforms.
I would like to deploy this project from command line.
I would rather not modify its project file. (I want to be able to do this for several web applications, so as little editing as possible is much appreciated.)
I would like to be able to build it only once and then deploy the different configurations from it.
So far I deployed from command line using something like this:
msbuild D:\pathToFile\DeployVariation01.csproj
/p:Configuration=Debug;
Platform=AnyCpu;
DeployOnBuild=true;
DeployTarget=MSDeployPublish;
MSDeployServiceURL="localhost";
DeployIisAppPath="DeployApp/DeployThis01";
MSDeployPublishMethod=InProc
And this performs just what I want, except it only deploys the "Debug"-Configuration.
How can I, with minimal adjustments, make it deploy my other configurations as well?
I was thinking maybe I could build a package that includes all my configurations and then deploy from that and decide "while deploying" which configuration to deploy?
Unfortunately I am pretty much stuck here; the approaches I have read about all seem to require some modifications to project files. Is there a way around that?
UPDATE:
I am still not really where I want to be here :).
But I looked into this PackageWeb-approach (also interesting video about that here) and it seems pretty nice; I can now build a package that includes all my transforms and then deploy from that as often as I want into multiple configurations.
One thing that I dislike about this is that I have to store my password in plain text in the generated parameters file for the PowerShell script. Does someone know a way around this? I would really rather have that be an encrypted password.
Also other approaches to solve my original problem are still appreciated.
I am working on the same problem and am taking two paths using Microsoft Web Deploy (MSDeploy), which is now at version 3.0.
I first compile the project using MSBUILD using the Package target passing in system.configuration, system.packagelocation. The Package Target generates a set of package files including a {PackageName}.SetParameters.xml file. The SetParameters.xml file by default allows on-publish changes to ConnectionStrings without recompiling when using msdeploy.exe to publish the file. The publish transformation process can also be customized by adding a parameters.xml file to the process defining additional parameterized web.config settings which can be changed at deploy time.
After the initial build I use the {PackageName}.deploy.cmd file generated by MSBUILD during the Package process to deploy the package to the target website. The Package process essentially duplicates the process you are currently doing from MSBUILD, in that I can publish one Build-Configuration web.config transform from one compile. The process provides a consistent deployment process that can target remote servers from a central CI environment, which is great from a pure deployment-process standpoint. The PackageBuild/Deploy process is parameterized within TeamCity, requiring changes to only a few parameters to set up a new deployment.
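In command-line form that flow looks roughly like this (a sketch; the project name, server URL, credentials, and parameters file are placeholders):

rem package once; the Release web.config transform is applied at this point
msbuild MyWebApp.csproj /t:Package /p:Configuration=Release
rem the Package target also emits MyWebApp.SetParameters.xml next to the .cmd;
rem swap in an environment-specific copy of that file before deploying
copy /Y MyWebApp.QA.SetParameters.xml MyWebApp.SetParameters.xml
rem /Y performs the deployment (use /T for a what-if run)
MyWebApp.deploy.cmd /Y /M:https://qa-server:8172/msdeploy.axd /U:deployUser /P:*** /A:Basic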
Like you, I cannot, however, compile a single version of code and deploy to multiple servers using the process as it exists today - which is my current focus. I want to parameterize the transform in a Continuous Deployment, build-once-deploy-many pattern to Dev, QA, User Testing, Staging, and Production.
I anticipate using one of two methods:
Create a Parameters.xml file for each project defining the variable deployment parameters along with a custom {ServerName}.SetParameters.xml for each target deployment, both to be used in conjunction with msdeploy.exe.
a. I am not sure defining a parameters.xml is a flexible enough process for my needs as the current project inserts and removes a variable number of web.config settings. Implementing a parameters file incorporating all of the variables could be too complex for my taste. I would also end up creating all of the target transformations, instead of the current developers initiated process. Not ideal.
I am following up on very recent updates to VS2012 Web Tools 2012.2 which allow tying a web.config transform to the publish profiles (profile.pubxml) now stored under SolutionName/Properties/PublishProfiles in VS2012.
VS2012 release 2012.2 adds the capability to create a second transform tied to the publish profile. The resulting transform process first runs the build configuration transformation, followed by the publish transformation, i.e. Release Transform followed by TargetServer Transform. Sayed Hashimi has a great YouTube video demonstrating the entire process using MSBUILD.
What is not entirely clear is whether the second transform is supported separately from the build using MSDeploy in a Continuous Deployment, build-once-deploy-many Pattern, or if the publish transformation is only supported during a separate Package/Build for each target transformation.
Option 1 will definitely work for some environments and was my first plan for tackling a Continuous Deployment process. I would much rather use Web Transforms to accomplish the process if possible.
An outside third possibility is using one of several CodePlex commandline projects that are capable of transforming web.config using the XDT transform engine. Unfortunately, using these tools would mean splicing the results into the Build/Package MSBUILD process in order to get the resulting web.config transformation into the deployment package - something I've not yet been successful in accomplishing. Sayed Hashimi also has a PackageWeb project from 2012 that might work as well. I am hoping his more recent work replaces the need for the extra steps involved in the packageweb solution.
Let me know if you decide on a solution - as I am definitely interested.

Maven best practice for generating artifacts for multiple environments [prod, test, dev] with CI/Hudson support?

I have a project that needs to be deployed into multiple environments (prod, test, dev). The differences mainly consist of configuration properties/files.
My idea was to use profiles and overlays to copy/configure the specialized output. But I'm stuck on whether I should generate multiple artifacts with specialized classifiers (e.g. "my-app-1.0-prod.zip/jar", "my-app-1.0-dev.zip/jar") or create multiple projects, one project for every environment.
Should I use maven-assembly-plugin to generate multiple artifacts for every environment?
Anyway, I'll need to generate all of them at once, so it seems that profiles do not fit... still puzzled :(
Any hints/examples/links will be more than welcomed.
As a side issue, I'm also wondering how to achieve this in a CI server (Hudson/Bamboo): generating these artifacts for all the environments and deploying them to their proper servers (e.g. using the Hudson SCP plugin)?
I prefer to package configuration files separately from the application. This allows you to run the EXACT same application and supply the configuration at run time. It also allows you to generate configuration files after the fact for an environment you didn't know you would need at build time. e.g. CERT
I use the "assembly" tool to zip up each domain's config files into named files.
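A hedged sketch of one such assembly descriptor (the id and directory layout are invented); the assembly id becomes the artifact classifier, so this produces something like my-app-1.0-prod-config.zip:

<assembly>
  <id>prod-config</id>
  <formats>
    <format>zip</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <fileSets>
    <fileSet>
      <!-- zip up only this environment's configuration files -->
      <directory>src/main/config/prod</directory>
      <outputDirectory>/</outputDirectory>
    </fileSet>
  </fileSets>
</assembly>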
I would use the version element (like 1.0-SNAPSHOT, 1.0-UAT, 1.0-PROD), and thus tags/branches at the VCS level, in combination with profiles (for environment-specific things like machine names, user names, passwords, etc.) to build the various artifacts.
We implemented an M2 plugin to build the final .properties file using the following approach:
The common, environment-unaware settings are read from common.properties.
The specific, environment-aware settings are read from dev.properties, test.properties or production.properties, thus overriding default values if necessary.
The final .properties file is written to disk from the Properties instance after reading the files in the given order.
That .properties file is what gets bundled, depending on the target environment.
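For example (property names invented), the override order behaves like this:

# common.properties - environment-unaware defaults, read first
app.name=my-app
db.pool.size=10

# production.properties - environment-aware values, read last, so they win
db.url=jdbc:postgresql://prod-db:5432/app
db.pool.size=50

# resulting production .properties bundled into the artifact
app.name=my-app
db.url=jdbc:postgresql://prod-db:5432/app
db.pool.size=50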
We use profiles to achieve that, but we only have the default profile - which we call the "development" profile and which has the configuration files in it - and a "release" profile, where we don't include the configuration files (so they can be properly configured when the application is installed).
I would use profiles to do it, and I would append the profile in the artifact name if you need to deploy it. I think it is somewhat similar to what Pascal had suggested, only that you will be using profiles and not versions.
PS: Another reason why we have dev/ release profiles only, is that whenever we send something for UAT or PROD, it has been released, so if there is a bug we can track down what the state of the code was when the application was released - it is easier to tag it in SVN than trying to find its state from the commit history.
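A sketch of the "append the profile in the artifact name" idea from the answer above (the profile id and property name are made up):

<profile>
  <id>prod</id>
  <properties>
    <env>prod</env>
  </properties>
  <build>
    <!-- produces my-app-1.0-prod.jar instead of my-app-1.0.jar -->
    <finalName>${project.artifactId}-${project.version}-${env}</finalName>
  </build>
</profile>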
I had this exact scenario last summer.
I ended up using profiles for each higher environment with classifiers. Default profile was "do no harm" development build. I had a DEV, INT, UAT, QA, and PROD profile.
I ended up defining multiple jobs within Hudson to generate the region specific artifacts.
The one thing I would have done differently was to architect the projects a bit differently so that the region-specific build was outside of the modularized main project. That way it would simply pull in the latest artifacts for each specific build rather than rebuilding the entire project for each region.
In fact, when I setup the jobs, the QA and PROD jobs were always setup to build off of a tag. Clearly this is something that you would tailor to your specific workplace rules on deployment.
Try using https://github.com/khmarbaise/multienv-maven-plugin to create one main WAR and one configuration JAR for each environment.