Azure DevOps multi CI/CD - automation

I have the following use case:
We have one solution that contains 5-10 different services (.NET Framework web apps of various versions). We have to set up CI/CD in Azure DevOps so we can automate the deployment of each service separately (or all services at once). There will be around 5 different environments for each service.
Challenges:
We are trying to avoid having (# of services X # of environments) separate builds and releases (~50 builds / ~50 releases).
We do have to be able to deploy one service alone without the others being affected.
We do have to be able to deploy ALL services at once for mass deployments.
P.S. We are currently using trunk-based development, but I am thinking about moving to Gitflow to have branch-based triggers, as I feel it would be easier to manage in this case.

CI - handled by your build server (e.g. TeamCity). Responsibility: build, test, obfuscate, create packages and, lastly, push the packages to a NuGet server (.NET specific). Traditionally, besides the app code, you also need at least 2 other packages: db migrations and infra migrations.
You build packages once and deploy that exact version everywhere else you want it to go.
https://gist.github.com/leblancmeneses/1d352bb79447cd7a486598c4dc796ef1
This script works in conjunction with https://github.com/leblancmeneses/RobustHaven.DevOps
CD - handled by something like Octopus Deploy. Responsibility: orchestrate the deployment process across your cluster. Octopus pulls packages from the NuGet server and moves them to whatever environment you want, onto whatever machines make up that environment.
https://www.robusthaven.com/presentations/DevOps
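As a rough sketch of that CI responsibility (all names, paths, versions and the feed URL below are placeholders, not the script from the gist), the per-service build boils down to:

    rem build, test and package one service; the package is built exactly once
    msbuild MyService\MyService.sln /p:Configuration=Release
    vstest.console.exe MyService\bin\Release\MyService.Tests.dll
    rem besides the app package, db and infra migrations get their own packages
    nuget pack MyService\MyService.nuspec -Version 1.2.%BUILD_NUMBER%
    nuget pack MyService\MyService.DbMigrations.nuspec -Version 1.2.%BUILD_NUMBER%
    nuget push MyService.1.2.%BUILD_NUMBER%.nupkg -Source https://your-nuget-server/ -ApiKey %NUGET_API_KEY%

From there, the CD tool only ever sees immutable package versions; promoting to another environment never rebuilds.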

You don't really need 50 builds; you can use a single build per service (assuming builds for different environments are identical) and build from different branches. Technically you can get away with a single release for 50 environments if you create your triggers/phases properly, but that would be a mess; just create one release per environment. I can't see how managing 50 environments on a single release would be manageable.
When YAML release pipelines arrive, this will become trivial; right now it's not, unfortunately.
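Until then, one stopgap for the "deploy ALL services at once" requirement is a small script that queues each service's existing build definition through the Azure DevOps REST API (the organization, project, PAT variable and definition IDs below are placeholders):

    rem queue every service's build definition in one go (from a .cmd file)
    for %%i in (101 102 103 104 105) do curl -u :%AZDO_PAT% -H "Content-Type: application/json" -d "{\"definition\":{\"id\":%%i}}" "https://dev.azure.com/yourorg/yourproject/_apis/build/builds?api-version=5.0"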

Related

What are the use cases of Docker on real projects?

I have read what Docker is, but I'm having a hard time finding real scenarios of using Docker.
It would be great to see your usages here.
I'm replicating the production environment with it: on each commit to the project, Jenkins builds the binaries, deploys them there, launches the required daemons and runs the integration tests, all in a very short time (a few seconds on top of the time the integration tests themselves take). Having no need to boot, and little overhead on memory/CPU/disk, is great for that kind of thing.
I could extend that use to development (just adding a volume mapping my git repository's code into the container, at least for scripting languages) to have the production environment running the code I'm actually editing, at a fraction of what VirtualBox would require.
I also needed to test how to integrate some 3rd-party code that modified the DB into a production system. I cloned the DB in one container, installed the production system in another, launched both and iterated on the integration until I got it right, going back to zero to try again in seconds; faster, cheaper and more scriptable than doing it with VMs + snapshots.
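A minimal sketch of that reset-to-zero loop (the image and container names are invented):

    # bring up a disposable copy of the DB and the production system
    docker run -d --name db-copy mycompany/proddb-snapshot
    docker run -d --name sut --link db-copy:db mycompany/prodsystem
    # ...try the integration, then throw both away and start clean in seconds
    docker rm -f sut db-copy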
I also run several desktop browser instances in containers, with their own plugins, cookies, data storage and so on kept separate. The Docker repository's desktop integration example is a good start for it, but I'm planning to test subuser to extend this kind of usage.
I've used Docker to implement a virtualized build server which any user could ask to run a build off their personal git branch in our canonical environment.
Each SSH connection made to the server was connected to a new container, ensuring that all builds were isolated from each other (a major pain point in the past), that the container's state couldn't be corrupted (since changes were all isolated to that single instance), and that even developers on platforms such as Windows, where Docker (and other tools in our canonical build environment) couldn't be run locally, would be able to run builds.
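A sketch of how that can be wired up (the image name and build entry point are invented; the real setup surely had more plumbing):

    #!/bin/sh
    # /usr/local/bin/build-in-container, set as ForceCommand in sshd_config,
    # so every SSH connection gets its own throwaway container
    exec docker run --rm -i canonical-build-env /build/run.sh "$SSH_ORIGINAL_COMMAND"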
We use it for the following:
We have a Jenkins Container which we can use to bring up our Jenkins server. We mount the workspace using volumes so we can migrate the server easily just by copying the files and launching the container somewhere else.
We use a Jetty container to easily deploy our war files in our production and development environment.
We use a whole host of other monitoring tools such as Uptime which we have containers for so that we can bring them up and down on various hosts with a single command.
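For instance, the Jenkins setup described above amounts to something like this (the host path and image tag are examples):

    # Jenkins state lives on a host volume, so migration = copy files + re-run
    docker run -d -p 8080:8080 -v /srv/jenkins_home:/var/jenkins_home jenkins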
I use docker to build and test our software on several different Linux distributions (RHEL 4/5/6/7, Ubuntu 12.04, 14.04).
Docker makes it easy and fast to create minimalistic and consistent build environments.
Docker gives you the benefits that other virtualization solutions give you, at a fraction of the resources needed.
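A sketch of that multi-distro loop (RHEL images aren't publicly distributed, so CentOS of the matching major version is the usual stand-in; build.sh is a placeholder for your build entry point):

    # run the same build script inside each target distro
    for img in centos:5 centos:6 centos:7 ubuntu:12.04 ubuntu:14.04; do
        docker run --rm -v "$PWD":/src -w /src "$img" ./build.sh
    done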

Best-practice for continuous integration and deployment

The concept of continuous integration has just been introduced to my team.
Assume we have an integration branch named Dev.
From it derive 3 branches, one for each specific current project:
Project A
Project B
Project C
First, TeamCity is configured on a dedicated server, and its goal is to:
Compile and launch unit and integration tests from the versioned sources of each branch, including Dev
Then, of course, each project branch (A,B and C) must be tested in a cloned production environment so that UAT can be carried out.
But I wonder at what frequency we should deploy. Every time the source code changes?
Should we deploy only Dev, which contains the mix of the 3 projects after merging each one into it (corresponding to the reality of the next production release), or the 3 projects independently?
If Dev is deployed, changes made to Dev after the deployment must not be taken into account. Indeed, there might be a new project starting, called Project D, which mustn't be part of the next release. So taking Dev for integration (UAT) is risky, because the deployer could involuntarily integrate content from Project D, and the environment would then not reflect the reality of the next release.
The other solution: we don't take Dev but the 3 projects independently; but must there then be 3 cloned production environments in parallel?
If yes, UAT couldn't be reliable, since the behaviour of the integration environment might change very often...
The concept of continuous deployment for UAT isn't clear to me...
Oh boy. You're hitting real-world CD problems. Really good questions.
The answer depends a bit on how tightly coupled the development work is across the various projects.
My ideal situation for you would be to have a number of "effort"-specific test environments. In one case, you could consider a test environment for each project. When there is a completed build of Project A, you push it into Environment A, which has the latest approved/production versions of B and C, and you can perform basic integration tests there. If they pass, you promote the build to an integration test environment where the latest good A is deployed alongside the latest B & C for the same release. When the integration test environment is passing tests, you can promote its contents as a release set containing known versions of A, B & C. That release set would be deployed to any UAT, staging, or production environments.
The basic idea is to give each project a degree of isolation so that it can be tested well even if the other projects are (temporarily) badly broken, while getting to full integration tests as quickly as possible. We also want to make sure that whatever actually passes integration tests is promoted together. Picking and choosing project versions to release that haven't been tested together is too risky for my taste.
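One lightweight way to make the release-set idea concrete (the manifest format and the deploy-service command are invented for illustration) is to promote a manifest of versions that passed integration tests together, never the latest of each component:

    # release-set.txt, written by the integration environment when tests pass:
    #   ProjectA=1.4.2
    #   ProjectB=2.0.7
    #   ProjectC=1.1.0
    # deploying to UAT/staging/production replays exactly that pinned set:
    while IFS='=' read -r project version; do
        deploy-service "$project" "$version" --env "$1"
    done < release-set.txt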
This is actually a topic I get to talk about quite a lot. If you don't mind, I'll list out a few presentations I've given around these topics.
1) Scaling CI for Parallel Development (co-presented with Chris Lucca of AccuRev)
This talks a good deal about broad strategies for balancing isolation and integration. Much of it assumes the sub-projects are being merged into a common code base, but the principles can be applied to independently built and deployed modules with only a little imagination.
2) Using uDeploy with Jenkins
(registration required)
This is more product-focused, but it shows almost exactly the idea of using an integration test environment for multiple projects, creating a release set (we call it a "snapshot") and promoting that. Our integration with TeamCity is quite similar, but I think the strategy shown in it may be more important.
3) Slides visualizing a multi-component pipeline:
http://www.slideshare.net/Urbancode/adapting-deployment-pipelines-for-complex-applications

Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product and we have Unit Tests that utilize MSTest (Visual Studio Unit Tests).
What solutions exist to implement a continuous integration machine (i.e. a build machine)?
The requirements for this are:
A build should be kicked off when necessary (i.e. code has changed in the repositories we care about)
Before the actual build, the latest version of the source code must be acquired from the repository we are building from
The build must build the entire product
The build must build all Unit Tests
The build must execute all unit tests
A summary of success/failure must be sent out after the build has finished; this must include information about the build itself but also about which Unit Tests failed and which ones succeeded.
The summary must contain which changesets were in this build that were not yet in the previous successful (!) build
The system must be configurable so that it can build from multiple branches (/repositories).
Ideally, this system would run on a single box (our product isn't that big) without any server components.
What solutions are currently available? What are their pros/cons? From the list above, what can be done and what cannot be done?
Thanks
TeamCity, from JetBrains, the makers of ReSharper, will do all of that. You will have to configure what it specifically means to "build your product", but you can configure everything you specified with it.
The software can alert you to failed builds, even down to alerting only the person responsible for checking in code that broke the build. It even comes with handy web pages you can view to see only your own changes, which builds they've been through successfully, which ones are pending, and which ones are currently being executed.
Since it is a distributed product, you can make it grow with your organization and product. If at some point you discover you're waiting too long for builds to complete because a lot of builds are queued up, you can add more build agents. The build agents are basically separate client programs you install on additional machines that execute the actual build configurations.
It comes in two flavors: the professional version and the enterprise version. The professional version is free and can contain up to 20 build configurations, 20 users, and 3 build agents. The enterprise version has unlimited users and build configurations, and you can also use LDAP-based security (think domain-verified users). There are also some other bonuses in the enterprise version. You can also buy licenses for more build agents if you need more than the initial 3.
Now, if "no server components" means you don't want it to act like a web server, you're going to be hard pressed to find something that will react to your commits.
However, if you mean that you don't want to have to install a server OS, then TeamCity can work on workstation versions of Windows as well. That isn't to say that you shouldn't consider setting up a proper server for it, but it will run on a workstation if that is what you require.
Our product BuildMaster does all of the things you listed by design, and there is a free, somewhat limited edition (e.g. only a limited number of issue tracking providers can be integrated with it, the database change script packaging tool isn't included in the free version, etc.) for 5 users or fewer.
What you've described are the basics of a CI tool, so every CI tool should be OK.
I use CruiseControl.NET, but it is buggy with Mercurial and is not very straightforward at first glance. I am nevertheless happy with it. Other tools that come to mind are Hudson, Team Build (from TFS) and TeamCity.
I have not tried the other tools, but you can see pros/cons here:
TeamCity vs CC.net
Hudson vs CC.net, Link 1 and Link 2
CC.net vs TFS
EDIT: I forgot to mention that Hudson and CruiseControl.NET are open-source projects; you can easily write plugins and patches for your install.
EDIT²: The Mercurial bugs seem to be fixed in the upcoming 1.6 version of CCNet (changes committed to the trunk this week).
There's always BuildBot, which I like (and have contributed some code to). It's fairly easy to set up and run on any OS, handles simple tasks like the ones you describe, and is remarkably flexible if you need it.
What you might find missing are the batteries-included log scrapers and/or report generators that other, more commercial CI servers come with, especially for enterprise-y frameworks.
It scales pretty well too; Mozilla and Chromium use it, among others.

What can CruiseControl.NET (or any CI server) do that MSBuild or NAnt can't?

I ask this question because I find that the community contributions to the various build engines (like MSBuild and NAnt) already include all the tasks promoted for CI servers, like getting versions from source control, cleaning folders, changing build numbers, sending emails, etc...
Is it only because a CI server "listens" to the changes that happen in the source control repository? What else am I missing?
Grzegorz Oledzki linked a good resource for finding the differences between multiple CI solutions, but it should be noted that the intent of MSBuild is specifically to turn code into binaries; it is used by CI software to build the source. It's true that it can do other things, but most of its tasks lie closely within that realm.
In addition to what you mentioned about listening to the repo, some CI servers can do all kinds of things like¹:
multi-agent building (not just multi-core, which MSBuild can do, but multi-machine)
monitoring build status
notifications (e-mail/sms/rss/whatnot)
assigning blame for broken builds
administrative features
supporting XFDs (extreme feedback devices)
automated deployment
And generally all from a handy UI.
¹ Not all CI software will have all of these features; this is by no means meant to be exhaustive, and there is some overlap.
I believe the CI (Continuous Integration) feature matrix will answer all your questions about particular CI providers and their capabilities.
Wow, there are just so many answers to this. As for what a CI system can do that a build script can't, other than listen to your version control system: for starters, systems like TeamCity can let you first test your code on the build server and only check it in if it passes all the tests.
I highly recommend using a CI server, but I prefer to keep all of the build logic in an MSBuild file and all of the "who to notify when it fails" logic in the CI server. Keeping the logic in the build file helps you reproduce the build on your own machine and makes it simple to set up new projects in the CI server or to change how the CI server builds the project.
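The payoff is that the CI server and a developer's machine invoke the exact same entry point; Build.proj and its targets here are examples of such a self-contained build file:

    rem identical command locally and on the CI server
    msbuild Build.proj /t:Clean;Build;Test /p:Configuration=Release

If the build breaks on the CI server, a developer can run that same line locally to reproduce it.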

Local Build Automation?

Working in a team environment, we have a Team Foundation Server that also contains a Team Build component. It is configured to automatically build all projects and solutions at specific times or on request.
We develop a product that is built from several solutions that depend on each other. When something has been changed in one solution, it has to be rebuilt locally, manually, in both debug and release mode so that the changes take effect in another solution that depends on it.
Also, when a developer retrieves all sources for the first time, he has to build all solutions manually, in the correct order, to get a working environment.
What is the best way to automate things like this? Create .cmd files that trigger the correct msbuild files? Using a program such as CruiseControl.NET?
What do you people do to maintain a clean local development environment?
What I did for our team was to provide a Visual Studio solution which contains all projects. Then I created a simple .cmd file which uses the command-line tools of Visual Studio to build this solution with its respective debug/release/profile configurations. This is a one-step build solution that can be used from every engineering machine.
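A minimal version of such a .cmd file (the solution names and their order are examples) could look like:

    @echo off
    rem build the dependent solutions in order, in each configuration
    for %%c in (Debug Release Profile) do (
      for %%s in (Core.sln Services.sln Frontend.sln) do (
        msbuild %%s /p:Configuration=%%c || exit /b 1
      )
    )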
The next level is the continuous integration system, which is set up to check for changes every 15 minutes and start a build if there are changes in the VCS. I'm using Hudson as our CI system. The CI system is used to build the native projects, the Java projects, as well as the Flex stuff. Since everything can be built from the command line, it's pretty easy to use with Hudson or CruiseControl.NET.