What's the recommended approach to store passwords/users used in deployments for Bamboo?

When creating a release of a build, the release captures all of the variables as they were set up in the system at the time the build was made, including global variables such as deployment credentials (users and passwords).
A deployment of a release built a week ago, now being promoted to production, is failing because the deployment credentials have changed since then. Rolling back the deployment fails for the same reason: the credentials captured in the release are no longer up to date.
Is there a way to update the variables of a build, or to have variables that are used only by deployment projects?

You can define variables at the deployment project level instead of the plan level, so they are picked up at deployment time rather than frozen into the release.
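As a minimal sketch (the variable names here are hypothetical), a deployment-level variable such as deploy_password shows up in a Script task as an environment variable with a bamboo_ prefix:
./deploy.sh --user "$bamboo_deploy_user" --password "$bamboo_deploy_password"
In non-script task fields, the same variable would be referenced as ${bamboo.deploy_password}.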

Related

Providing environment variables with vuejs and azuredevops

Right now I am building a project using Vue.js for the front end. When testing locally, creating .env.development and .env.production files works fine; each environment shows its variables correctly. My issue comes when building in Azure DevOps: I am pointing to the dist folder, which, obviously, only provides the production variables.
Is there a way to pass dev vs. prod environment variables to Vue.js to build against in an Azure DevOps/Vue project?
There seems to be something "magical" about the way Vue injects these files into index.html, and I can't pinpoint how Vue decides which env variables to use.
This seems to be a question about the Vue compile process rather than Azure DevOps Pipelines.
I don't know a thing about Vue, but if it works like other JavaScript/TypeScript frameworks, you should specify the environment in your build tasks.
In my Angular projects I create an npm task specifying which environment to build (e.g. npm run build:prod or npm run build:pre). Then my Azure Pipelines run the right task depending on the environment I'm going to deploy (you may even store the output in different build artifacts per environment, so all of them are available to your deployment pipeline).
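For Vue specifically, assuming the project uses Vue CLI 3+, the --mode flag controls which .env file is loaded, so the equivalent npm scripts would wrap something like:
vue-cli-service build --mode development   # loads .env.development
vue-cli-service build --mode production    # loads .env.production (the default for build)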
Finally, just a recommendation: review which values you store in your .env.production file, to be sure it is safe to keep that file in a repository. If it contains sensitive information, use Pipeline Variables instead; those can be kept hidden, available only to the DevOps team.
Regards.

Azure DevOps multi CI/CD

I have a following use case:
We have one solution that contains 5-10 different services (.NET Framework web apps of various versions). We have to set up CI/CD in Azure DevOps to be able to automate the deployment of each service separately (or all services at once). There will be around 5 different environments for each service.
Challenges:
We are trying to avoid having (# of services X # of environments) separate builds and releases (~50 builds / ~50 releases).
We do have to be able to deploy one service alone without others being affected.
We do have to be able to deploy ALL services all at once for mass deployments.
P.S. We are currently using trunk-based development, but I am thinking about moving to gitflow to have branch-based triggers, as I feel it would be easier to manage in this case.
CI - handled by your build server (e.g. TeamCity). Responsibility: build, test, obfuscate, create packages and, lastly, push packages to a NuGet server (.NET-specific). Traditionally, besides the app code, you also need at least two other packages: db migrations and infra migrations.
You build packages once and deploy the exact version everywhere else you want it to go.
https://gist.github.com/leblancmeneses/1d352bb79447cd7a486598c4dc796ef1
This script works in conjunction with https://github.com/leblancmeneses/RobustHaven.DevOps
CD - handled by something like Octopus Deploy. Responsibility: orchestrate the deployment process across your cluster. Octopus pulls packages from the NuGet server and moves them to whatever environment you want and to whatever machines make up that environment.
https://www.robusthaven.com/presentations/DevOps
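As a sketch of the "build once, deploy everywhere" step (the server URL, package name and key below are placeholders): CI publishes each service's versioned package exactly once, and the CD tool later pulls that exact version into each environment.
nuget push MyService.1.2.3.nupkg -Source https://nuget.example.com/api/v2/package -ApiKey $NUGET_API_KEY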
You don't really need 50 builds: you can use a single build per service (assuming builds for different environments are identical) and build from different branches. Technically you could get away with a single release for 50 environments if you set up your triggers/phases properly, but that would be a mess; just create one for each environment. I can't see how managing 50 environments in a single release would be manageable.
When YAML release pipelines arrive this will become trivial; right now it's not, unfortunately.

Is there a difference between the staging and production env in CodePush?

I accidentally pushed the binary with the staging key. Is there any real difference between the two stages (in terms of CLI/library settings) aside from the obvious naming differences?
Will I have problems trying to push updates using the staging env?
CodePush Staging deployments are for debug builds (app-debug.apk), while Production is, as you guessed, for production releases (app-release.apk).
Refer to this text in their README, which says:
And that's it! Now when you run or build your app, your debug builds will automatically be configured to sync with your Staging deployment, and your release builds will be configured to sync with your Production deployment.
In your case I think you won't have any problems pushing updates with the staging env, as that is a supported feature, but they will be limited to app-debug.apk builds, not app-release.apk ones.
I would guess you wrote something like
code-push release-react <appName> <platform>
Then it said something like this:
Upload progress:[==================================================] 100% 0.0s
Successfully released an update containing the "/tmp/CodePush" directory to the "Staging" deployment of the "APP_NAME" app.
This is Staging: use it to test your app on the devices where you installed the app-debug.apk bundle, so you know how your update is going to behave.
If you are okay with it, then you should promote it to the Production builds with
code-push promote APP_NAME_HERE Staging Production
Or follow this answer, How to update "Production" deployment using Code Push CLI?, to release an update straight to production builds.
To answer your question:
Is there any real difference between the two stages (in terms of cli / library setting) aside from the obvious naming differences
I can say no, there is no difference, and it's up to you to decide how to build your workflow (although there are some recommended practices for how to use it, e.g. https://github.com/Microsoft/react-native-code-push#multi-deployment-testing).
The difference between the two is more semantic; how you use them is up to you.
Moreover, you can create an arbitrary number of deployments if having just a Staging and a Production version of your app isn't enough to meet your needs.
You can use code-push deployment add <appName> <deploymentName> for this.
You can also rename or delete deployments if needed.
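For completeness, a few of the deployment-management commands in the same CLI (the "QA" name below is hypothetical):
code-push deployment ls <appName>
code-push deployment add <appName> QA
code-push deployment rename <appName> QA Beta
code-push deployment rm <appName> Beta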

How to avoid a build and deployment of dependencies which have no code changes

I'm doing a proof of concept on continuous integration and whether our development team will benefit from automated builds and automated deployments to reduce human error.
I've already come quite far in the process but have some questions on how to configure our incremental builds to avoid rebuilding of dependencies that had no code changes.
In addition I’d like our deployment tool to identify and deploy only assemblies rebuilt as a result of a code change.
We already use Microsoft products: TFS for source control, Visual Studio for development, and Team Foundation Build for continuous integration builds. We're currently leaning toward InRelease for deployment, as it seems to integrate well with Team Foundation Build.
But first, here is our current setup...
There are 200+ C# solution files, each containing one or more projects. It is not practical in this environment to combine these projects into fewer solutions; that is by design. Projects within a solution use project references to resolve dependencies, and file references to projects in other solutions. As far as I know, this is the approach recommended by Microsoft when dealing with a large number of projects.
We use a "branch by feature" strategy e.g. isolated development on concurrent features branches which is merged up to a stable Main branch when complete. When it's time for a release, a release is branched from main and isolated for hotfixes and deployment. The feature branches and main branch have a CI build triggered by code check-ins. Releases will mostly like be manually executed from InRelease against a selected release branch. A release will be deployed through various environments including INTEGRATION/TEST, UAT and ultimately to all our clients. We're still fleshing out the details of branching strategy, but that's a question for another time.
The current problems to solve:
1. Avoid rebuilding of dependencies that have no code changes...
When we deploy new functionality or a patch to a client, we want to push the absolute minimum in files. Our company has a very large customer base (thousands of customers) with sometimes slow internet connections, so doing a full deployment of all assemblies (200+) to every customer is not an option. I've partially solved the problem by setting up incremental builds which correctly rebuilds only changed projects as expected but also rebuilds all the dependent projects even though NO CODE CHANGES were made to them. This results in both the changed assemblies and dependencies having new timestamps. If we use the change of timestamp to identify which assemblies to deploy, then this would result in deployment of functionally unchanged assemblies. The goal here is to deploy only assemblies where the code has changed and assemblies where breaking changes occur.
For example:
Solution B, has a project called Project B
Solution A, has a project called Project A
Project B -> Project A (i.e. Project B has a file reference to Project A)
When a non-breaking change is made in Project A, say to the interior of a method, the expected result is: only A is built and is therefore a candidate for deployment.
When a breaking change is made in Project A that breaks Project B, the expected result is: both A and B are built and are therefore candidates for deployment.
Currently MSBuild rebuilds all dependents regardless, which is not what we want.
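For what it's worth, MSBuild does expose a BuildProjectReferences property that stops referenced projects from being re-built. It does not solve the dependent-rebuild problem described above (the referenced assembly's timestamp is still an input to the dependent project's incremental build), but it illustrates the kind of knob available:
msbuild SolutionB.sln /t:Build /p:Configuration=Release /p:BuildProjectReferences=false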
2. Automatically identify which assemblies should be deployed...
I have a partial solution to the problem.
When a build is performed, my build process template is configured to run a MSBuild script containing a list of solutions to build in a particular order.
This operation is performed in the build agent's workspace. Every time a new build is performed, the build process template creates a uniquely named drop folder and copies the binaries from the build agent workspace into it. This is out-of-the-box functionality taken care of by the standard build process template. The build has been configured not to clear the build agent workspace, so the first time it runs it will build all projects within a solution, but subsequent builds will only build projects that have code changes or depend on changed projects (incremental build). Therefore unchanged assemblies keep their original timestamps and changed assemblies get new timestamps.
We have a tool that can do folder comparisons between drop folders and output the results to a txt file. This allows us to identify which binaries have been added/changed/removed since the last deployment. It also gives us the added benefit of comparing the list of actual artefacts against a manifest of expected artefacts as defined by the developer, ensuring that no assemblies get deployed that have not been specified and proven to be unit tested.
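A minimal sketch of that comparison idea, assuming a Unix-like shell and hypothetical drop paths (a PowerShell equivalent would work the same way):
find /drops/Build_100 -name '*.dll' -exec md5sum {} + | sed 's|/drops/Build_100/||' | sort -k2 > old.txt
find /drops/Build_101 -name '*.dll' -exec md5sum {} + | sed 's|/drops/Build_101/||' | sort -k2 > new.txt
diff old.txt new.txt
One caveat if you ever switch from timestamps to content hashes: rebuilt .NET assemblies can differ byte-for-byte even with no code changes (PE header timestamp, MVID), so hashing alone does not distinguish "rebuilt" from "really changed" either.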
The question is: how can we leverage InRelease to deploy only the required files, as per the example above, and not all files in the drop folder?
Install a TFS Proxy in front of your build machine; this reduces the network traffic.
Start with a branching strategy like Service Pack; you can read documentation about it in the ALM Rangers guidance. Then adapt your process template to build just the part of the code that changed. I think you will find more information in BRD Lite, another piece of ALM Rangers guidance.

Maven + SSDM Build and Runtime Environment Automation

Preface:
My company, like most, has several run-time environments and several release versions, which are themselves composed of different versions of various jars.
For example, let us consider release versions 1.1, 1.2, and 1.3 of Software X, which may be deployed to a developer computer, testing, or production.
Software-x-1.1 is itself composed of jarA-0.9.1 and jarB-0.7.5, but software-x-1.3 is composed of jarA-1.7.31 and jarB-0.8.1.
Currently we use Spring's PropertyPlaceholderConfigurer to configure run-time variables (such as database credentials), however, properties also change with release versions.
We also use Maven 2 POM version 4 to specify which versions of our code need to be used. We place the version numbers of our jars as properties within profiles (dev,test,prod) inside of the parent pom and then reference those version numbers in all project poms.
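For reference, profile-pinned versions like those are selected at build time with Maven's -P flag (profile ids taken from the question):
mvn clean install -Pdev
mvn clean install -Pprod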
As of right now, we have no way to specify which project versions pertain to a given release other than the most current one. Moreover, we deploy our run-time configurations to the SSDM pickup which then configures and creates the services defined by the built versions of our software.
--
Questions:
Is there any procedure/tool we can use to build our product by merely providing the run-time environment and version number, i.e. "build 1.1 dev"?
Is there any way we can store the required jar versions for each release build? We are currently versioning all files, including the parent pom, but merely versioning the parent pom does not record which release version pertains to that parent pom.
What else can we do to further automate the process of builds?
For example, if we could manage run-time configurations within the parent pom that would be a step in the right direction, but that seems like a violation of scope.
Any tool outside of our framework is inconceivable at this point, but not in the far future.
Summary:
How can we automate our build process to the fullest extent without being error prone?
Regarding release versions 1.1, 1.2, and 1.3 of Software X: using profiles to handle the differences between the test, production, etc. environments seems to be the right way.
The software itself is another story. I assume you are using a version control tool (VCT) to store the state of your development. So during the preparation of Software-x-1.1 you change your root POM and define the dependencies (jarA-0.9.1, jarB-0.7.5), then make a tag "Release 1.1" and continue to Release 1.2. During the development of Release 1.3 you decide to change the dependencies (to jarA-1.7.31 and jarB-0.8.1), which results in a change to the POMs (or to your root POM only). Maybe I have overlooked your real problem.
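A sketch of that tagging step, assuming Git (substitute the equivalent command in your VCT):
git tag -a software-x-1.1 -m "Release 1.1: jarA-0.9.1, jarB-0.7.5"
The tag then permanently records which parent POM, and hence which jar versions, made up that release.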
If I summarize your problem: you want to manage releases of versions across multiple environments, and your release distribution is an aggregate of executables (jars) as well as environment properties. Different versions of these deployable distributions propagate to different environments at different stages, each with its own set of environment properties, and you are looking for a common roll-out (or perhaps release) process to handle all of this.
It seems the first problem is that you run one build per release per environment when propagating a release. If I am not wrong, you should first look at your app architecture to see whether you can create environment-independent binaries. In some cases, projects keep properties in a separate module that is deployed along with the jars, plus a property manager of sorts that reads the files. So you might have a Maven module called properties, which bundles one zip for each environment's set of property files. Your deployer script can then be given a parameter at run time telling it which zip file to extract to the location the properties are read from. What you gain this way is that you "create one release distribution per release - which has contents to run on all environments".
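A minimal sketch of that deployer step (the paths and bundle names are hypothetical):
ENV="$1"                                   # dev | test | prod
unzip -o "properties-${ENV}.zip" -d /opt/software-x/conf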
Also, is it the case that your release version is not the version you have in the POM? If so, you should align your release version with the POMs: the POM should be 1.3-SNAPSHOT while you are working on the development phase of that release, and be bumped to 1.3 in a branch when you release it.
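One way to do that bump is the versions-maven-plugin (the maven-release-plugin can automate the tag-and-bump cycle as well):
mvn versions:set -DnewVersion=1.3
mvn versions:commit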
There are no one-size-fits-all solutions for such things, but practices similar to this one do help to a good extent.
PS: Do let me know if I got your problem right, or have ended up beating around the bush ;-) DS.