How to handle multiple applications in a CI/CD pipeline in Bamboo

I have more than 15 applications (web applications and console applications) to be built and deployed to various environments (Dev, QA and Prod) via Bamboo.
Which would be the correct approach?
1. Create a separate project for each application.
2. Create one project and put all the applications under it (as plans and stages).

Even if all of your applications relate to a single project/product, it's good to group applications into projects for at least two reasons:
Bamboo provides reporting (e.g. the wallboard) across all plans in a project
when you later create an application that is not related to your project/product, it stays separated
See the chapter "How is a Bamboo workflow organized?" on this page

Azure DevOps multi CI/CD

I have the following use case:
We have one solution that contains 5-10 different services (.NET Framework Web Apps of various versions). We have to set up CI/CD in Azure DevOps to be able to automate the deployment of each service separately (or of all services at once). There will be around 5 different environments for each service.
Challenges:
We are trying to avoid having (# of services x # of environments) separate builds and releases (~50 builds / ~50 releases).
We do have to be able to deploy one service alone without the others being affected.
We do have to be able to deploy ALL services at once for mass deployments.
P.S. We are currently using trunk-based development, but I am thinking about moving to Gitflow to get branch-based triggers, as I feel it would be easier to manage in this case.
CI - handled by your build server (e.g. TeamCity). Responsibility: build, test, obfuscate, create packages and lastly push the packages to a NuGet server (.NET specific). Traditionally, besides the app code, you also need at least 2 other packages: db migrations and infra migrations.
You build packages once and deploy that exact version everywhere else you want it to go.
https://gist.github.com/leblancmeneses/1d352bb79447cd7a486598c4dc796ef1
This script works in conjunction with https://github.com/leblancmeneses/RobustHaven.DevOps
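As a rough sketch of that CI chain (build, test, pack once, push), a minimal .cmd step could look like the following; the solution name, package version and feed URL are hypothetical placeholders, not taken from the linked script:

    @echo off
    rem Build, test and pack exactly once; push the same package to the feed.
    rem MyService, 1.2.3 and nuget.example.com are placeholders.
    msbuild MyService.sln /t:Build /p:Configuration=Release || exit /b 1
    vstest.console MyService.Tests\bin\Release\MyService.Tests.dll || exit /b 1
    nuget pack MyService\MyService.nuspec -Version 1.2.3 || exit /b 1
    nuget push MyService.1.2.3.nupkg -Source https://nuget.example.com/feed -ApiKey %NUGET_API_KEY% || exit /b 1

The resulting MyService.1.2.3.nupkg is what travels, unchanged, through every environment; only deploy-time configuration differs.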
CD - handled by something like Octopus Deploy. Responsibility: orchestrate the deployment process across your cluster. Octopus pulls packages from the NuGet server and moves them to whatever environment you want, onto whatever machines make up that environment.
https://www.robusthaven.com/presentations/DevOps
You don't really need 50 builds; you can use a single build per service (assuming the builds for different environments are identical) and build from different branches. Technically you could get away with a single release for 50 environments if you create your triggers/phases properly, but that would be a mess; just create one release for each environment. I can't see how managing 50 environments in a single release would be manageable.
When YAML release pipelines arrive, this becomes trivial; right now it's not, unfortunately.
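To illustrate the single-build-per-service idea: the same build definition can be queued against different branches through the TFS/VSTS REST API, so one definition serves every environment's branch. A hedged sketch, where the account, project, definition id and branch name are all placeholders:

    @echo off
    rem Queue the service's one build definition for a chosen branch.
    rem %PAT% is a personal access token; 42 is a hypothetical definition id.
    curl -u :%PAT% ^
      -H "Content-Type: application/json" ^
      -d "{\"definition\":{\"id\":42},\"sourceBranch\":\"refs/heads/release/serviceA\"}" ^
      "https://myaccount.visualstudio.com/MyProject/_apis/build/builds?api-version=2.0"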

Cucumber: Integration with two different projects

We have two different projects, one in JS and the other in Java. Do you know if a single Cucumber layer can help me integrate the two projects? Let's say one project (JS) is running some operations (tests) that appear on the web application while the second project (Java) is running actions on a mobile device, and we want to be able to combine the two sets of actions, mobile and web. I'd appreciate any help or ideas. Thanks, Eyal
1) Are they different projects, or the same project with 2 or 3 of the web, iOS and Android versions?
2) Must you write your automation suite in the same language as the application in order to install test data or reuse application code in the suite?
If you answered 1) with "different projects" or 2) with "yes", then it's not wise to do this.
If you want to write a library of helper functions for cross-platform purposes, then you can definitely do that. I'm currently looking into the 3 platforms myself, dividing my library into modules and requiring the one I need at the start of run time.
EDIT
In your case, I would go for writing a Cucumber JVM framework, as Java has more robust libraries for working with Desktop and Android applications than JS, from what I've seen on this site.

How to avoid a build and deployment of dependencies which have no code changes

I'm doing a proof of concept on continuous integration and whether our development team will benefit from automated builds and automated deployments to reduce human error.
I've already come quite far in the process but have some questions on how to configure our incremental builds to avoid rebuilding dependencies that have had no code changes.
In addition I’d like our deployment tool to identify and deploy only assemblies rebuilt as a result of a code change.
We already use Microsoft products like TFS for source control, Visual Studio for development and Team Foundation Build for continuous integration builds. We're currently leaning toward InRelease for deployment as it seems to integrate well with Team Foundation Build.
But first, here is our current setup...
There are 200+ C# solution files, each containing one or more projects. It is not practical in our environment to combine these projects into fewer solutions; this is by design. Projects within a solution use project references to resolve dependencies, and file references to projects in other solutions. As far as I know, this is the approach recommended by Microsoft when dealing with a large number of projects.
We use a "branch by feature" strategy e.g. isolated development on concurrent features branches which is merged up to a stable Main branch when complete. When it's time for a release, a release is branched from main and isolated for hotfixes and deployment. The feature branches and main branch have a CI build triggered by code check-ins. Releases will mostly like be manually executed from InRelease against a selected release branch. A release will be deployed through various environments including INTEGRATION/TEST, UAT and ultimately to all our clients. We're still fleshing out the details of branching strategy, but that's a question for another time.
The current problems to solve:
1. Avoid rebuilding of dependencies that have no code changes...
When we deploy new functionality or a patch to a client, we want to push the absolute minimum set of files. Our company has a very large customer base (thousands of customers), sometimes with slow internet connections, so doing a full deployment of all assemblies (200+) to every customer is not an option. I've partially solved the problem by setting up incremental builds which correctly rebuild only changed projects, as expected, but which also rebuild all the dependent projects even though no code changes were made to them. This results in both the changed assemblies and their dependents having new timestamps. If we use the change of timestamp to identify which assemblies to deploy, this results in the deployment of functionally unchanged assemblies. The goal here is to deploy only assemblies where the code has changed, plus assemblies where breaking changes occur.
For example:
Solution B has a project called Project B
Solution A has a project called Project A
Project B -> Project A (Project B has a file reference to Project A's assembly)
When a non-breaking change is made in Project A, say to the interior of a method, the expected result is: only A is rebuilt and therefore a candidate for deployment.
When a breaking change is made in Project A that breaks Project B, the expected result is: both A and B are rebuilt and therefore candidates for deployment.
Currently MSBuild rebuilds all dependents regardless, which is not what we want.
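One mitigation for the file-reference case is to refresh the referenced assembly only when its bytes actually changed, so the dependent project's inputs keep their old timestamps and MSBuild's up-to-date check skips it. A minimal sketch with hypothetical paths:

    @echo off
    rem Update Project B's copy of ProjectA.dll only if it really differs,
    rem so a no-op rebuild of A does not force a rebuild of B.
    set SRC=..\SolutionA\ProjectA\bin\Release\ProjectA.dll
    set DST=lib\ProjectA.dll
    fc /b "%SRC%" "%DST%" >nul 2>&1
    if errorlevel 1 (
      copy /y "%SRC%" "%DST%" >nul
      echo ProjectA.dll changed - Project B will rebuild
    ) else (
      echo ProjectA.dll unchanged - Project B stays up to date
    )

One caveat: a genuine interior-only change to Project A still produces a different binary (the compiler embeds a new module version GUID on every compile), so a byte comparison alone will not fully achieve the "non-breaking change rebuilds only A" goal; that would require comparing the assembly's public surface instead.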
2. Automatically identify which assemblies should be deployed...
I have a partial solution to the problem.
When a build is performed, my build process template is configured to run an MSBuild script containing a list of solutions to build in a particular order.
This operation is performed in the build agent's workspace. Every time a new build is performed, the build process template creates a uniquely named drop folder and copies the binaries from the build agent workspace to the drop folder. This is out-of-the-box functionality taken care of by the standard build process template. The build has been configured not to clear the build agent workspace, so the first time it runs it will build all projects within a solution, but subsequent builds will only build projects that have code changes or that depend on changed projects (incremental build). Therefore unchanged assemblies keep their original timestamps and changed assemblies get new timestamps.
We have a tool that can do folder comparisons between drop folders and output the results to a txt file. This allows us to identify which binaries have been added/changed/removed since the last deployment. It also gives us the added benefit of comparing the list of actual artefacts against a manifest of expected artefacts as defined by the developer. This ensures that no assemblies get deployed that have not been specified and proven to be unit tested.
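If a dedicated comparison tool is not available, robocopy's list-only mode can produce a similar report of added/changed binaries between two drops; a sketch with placeholder drop paths:

    @echo off
    rem List, without copying, the binaries in the new drop that differ from
    rem the previous one (robocopy compares size and timestamp).
    set PREV=\\buildserver\drops\Release_122
    set CURR=\\buildserver\drops\Release_123
    robocopy "%CURR%" "%PREV%" *.dll *.exe /S /L /NJH /NJS /NDL /NS /NC /NP /LOG:deploylist.txt

deploylist.txt then holds exactly the assemblies whose timestamps moved, which is the candidate set for deployment.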
The question is: how can we leverage InRelease to deploy only the required files, as per the example above, and not all the files in the drop folder?
Install a TFS proxy in front of your build machine; this reduces network traffic.
You could start with a branching strategy like Service Pack; there is documentation about it in the ALM Rangers guidance. Then adapt your process template to build just the part of the code that changed. I think you will find more information in BRD Lite, another ALM Rangers guidance.

Best-practice for continuous integration and deployment

The continuous integration concept has just been introduced in my team.
Assume we have an integration branch named Dev.
From it are derived 3 branches, one for each specific current project:
Project A
Project B
Project C
First, TeamCity is configured on a dedicated server, and its goal is to:
Compile and launch unit and integration tests from the versioned sources of each branch, including Dev
Then, of course, each project branch (A, B and C) must be tested in a cloned production environment so that UAT can be carried out.
But I wonder at what frequency we should deploy. Every time the source code changes?
Should we deploy only Dev, which contains the mix of the 3 projects after merging each one into it (corresponding to the reality of the next production release), or the 3 projects independently?
If Dev is deployed, future changes on Dev must potentially be kept out of scope. Indeed, there might be a new project starting, called Project D, that mustn't be part of the next release. So taking Dev for integration (UAT) is risky, because the deployer could involuntarily include content from Project D, and then the environment would not reflect the reality of the next release.
The other solution: not taking Dev, but the 3 projects independently. Must there then be 3 cloned production environments in parallel?
If yes, UAT couldn't be reliable, since the behaviour of the integration environment might change very often...
The concept of continuous deployment for UAT isn't clear to me...
Oh boy. You're hitting real world CD problems. Really good questions.
The answer depends a bit on how tightly coupled the development work is across the various projects.
My ideal situation for you would be to have a number of "effort"-specific test environments. In one case, you could consider a test environment for each project. When there is a completed build of Project A, you push it into Environment A, which has the latest approved/production versions of B and C, and you can perform basic integration tests there. If they pass, you promote the build to an integration test environment where the latest good A is deployed alongside the latest B & C for the same release. When the integration test environment is passing tests, you can promote its contents as a release set containing known versions of A, B & C. That release set would then be deployed to any UAT, staging, or production environments.
The basic idea is to give each project a degree of isolation so that it can be tested well even if the other projects are (temporarily) badly broken, while getting to full integration tests as quickly as possible. We also want to make sure that whatever we find actually passes integration tests will be promoted together. Picking and choosing project versions to release that haven't been tested together is too risky for my taste.
This is actually a topic I get to talk about quite a lot. If you don't mind, I'll list out a few presentations I've given around these topics.
1) Scaling CI for Parallel Development (co-presented with Chris Lucca of AccuRev)
This talks a good deal about broad strategies for balancing isolation and integration. Much of it assumes the sub-projects are being merged into a common code base, but the principles can be applied to independently built and deployed modules with only a little imagination.
2) Using uDeploy with Jenkins
(registration required)
This one is more product-focused, but it shows almost exactly the idea of using an integration test environment for multiple projects, creating a release set (we call it a "snapshot") and promoting that. Our integration with TeamCity is quite similar, but I think the strategy presented there may be the more important part.
3) Slides visualizing a multi-component pipeline:
http://www.slideshare.net/Urbancode/adapting-deployment-pipelines-for-complex-applications

Local Build Automation?

Working in a team environment, we have a Team Foundation Server that also contains a Team Build component. It is configured to automatically build all projects and solutions at specific times or on request.
We develop a product that is built from several solutions that depend on each other. When something has been changed in one solution, it has to be rebuilt locally, manually, in both debug and release mode so that the changes take effect in another solution that depends on it.
Also, when a developer retrieves all sources for the first time, he has to build all solutions manually in the correct order to get a working environment.
What is the best way to automate things like this? Create .cmd files that trigger the correct MSBuild files? Use a program such as CruiseControl.NET?
What do you people do to maintain a clean local development environment?
What I did for our team was to provide a Visual Studio solution which contains all projects. Then I created a simple .cmd file which uses the command-line tools of Visual Studio to build this solution in the respective debug/release/profile configurations. This is a one-step build solution that can be used from every engineering machine.
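Such a one-step script can be as small as a loop over the configurations. A sketch assuming the umbrella solution is called All.sln and the script runs from a Visual Studio command prompt:

    @echo off
    rem Build the all-projects solution in each configuration the team
    rem uses, stopping at the first failure.
    for %%C in (Debug Release Profile) do (
      msbuild All.sln /t:Build /p:Configuration=%%C /m || exit /b 1
    )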
The next level is the continuous integration system, which is set up to check for changes every 15 minutes and start a build if there are changes in the VCS. I'm using Hudson as our CI system. The CI system is used to build the native projects, the Java projects as well as the Flex stuff. Since everything can be built from the command line, it's pretty easy to use it with Hudson or CruiseControl.NET.