Bamboo build with local and remote agents - msbuild

I have a .NET Web API project, and continuous integration is set up using Bamboo. I am using SpecFlow tests, and some of them are tagged to run on a Bamboo remote agent because they are slow. The other tests are supposed to run on multiple local agents. I have set up multiple stages in the Bamboo build plan to get parallelism, with each stage set to run a specifically tagged test suite.
My question is: what is the general practice for setting up a Bamboo plan to run on multiple agents (local and remote), and how can I share one MSBuild output (DLLs and config) across multiple agents?

If you need to split the build and test phases, the usual approach is a Build stage with a single job that produces an artifact containing the build output.
Then you create another stage and put several jobs in it. Those jobs can be configured to download the artifact produced by the Build stage and execute the tests against that build.
If you want some of your jobs to run on a remote agent, add a job requirement that only the remote agent can satisfy.
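For illustration, here is a minimal Bamboo YAML Specs sketch of that layout, assuming an artifact named build-output, a solution called MyApi.sln, and a custom capability agent.location=remote defined only on the remote agent; the exact field names should be checked against the Bamboo Specs reference for your Bamboo version.

    # Sketch only: plan keys, paths, test filters and the agent.location
    # capability are placeholders for illustration.
    version: 2
    plan:
      project-key: PROJ
      key: WEBAPI
      name: WebApi build and tests
    stages:
      - Build:
          jobs:
            - Build
      - Run tests:
          jobs:
            - Fast SpecFlow tests
            - Slow SpecFlow tests
    Build:
      tasks:
        - script:
            - msbuild MyApi.sln /p:Configuration=Release
      artifacts:
        - name: build-output              # the DLLs and config to share
          location: MyApi/bin/Release
          pattern: '**/*'
          shared: true                    # lets later stages download it
    Fast SpecFlow tests:
      artifact-subscriptions:
        - artifact: build-output
          destination: build              # extracted here on the test agent
      tasks:
        - script:
            - vstest.console.exe build\MyApi.Specs.dll /TestCaseFilter:"TestCategory!=slow"
    Slow SpecFlow tests:
      artifact-subscriptions:
        - artifact: build-output
          destination: build
      tasks:
        - script:
            - vstest.console.exe build\MyApi.Specs.dll /TestCaseFilter:"TestCategory=slow"
      requirements:
        - agent.location: remote          # custom capability only the remote agent declares

The same structure can be configured through the Bamboo UI: a shared artifact on the Build job, artifact subscriptions on the test jobs, and a requirement on the slow-test job that matches a capability you add to the remote agent.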

Related

GitLabCI deploy to one of x environments

We have a pretty big CI pipeline that includes E2E tests. We currently cannot use the GitLab feature of deploying to dynamic environments, because creating an environment still requires some manual work. Therefore all E2E tests run against a single "E2E stage". I would like to add more E2E stages manually.
But here is the problem:
How can I deploy to one (currently unused) E2E stage, run the tests (in a different GitLab CI job), and lock that stage against other tests during that time?
I tried using three special runners with different tags (["e2e", "e2e-1"], ["e2e", "e2e-2"], ["e2e", "e2e-3"]), but I could not tell GitLab to run all jobs of a pipeline on only one of these runners (chosen randomly, or better yet, whichever is currently free). If I start the deployment on "e2e-3" and "e2e-1" is free when testing starts, GitLab might choose that one, and it will not have the correct artifact deployed.
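For context, the setup described above looks roughly like the following .gitlab-ci.yml fragment (job names, stage names, and scripts are placeholders); note that nothing in it ties the deploy job and the test job to the same runner, which is exactly the problem:

    # Sketch of the attempted setup: three runners registered with tags
    # ["e2e", "e2e-1"], ["e2e", "e2e-2"], ["e2e", "e2e-3"].
    stages: [deploy-e2e, e2e-test]

    deploy_e2e:
      stage: deploy-e2e
      tags: [e2e]                    # any of the three E2E runners may pick this up
      script:
        - ./deploy-to-e2e-stage.sh   # placeholder deploy script

    e2e_tests:
      stage: e2e-test
      tags: [e2e]                    # may land on a different runner than the deploy job
      script:
        - ./run-e2e-tests.sh         # placeholder test script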

Manual Testing in Devops Pipeline

We are currently using a traditional waterfall model, with manual and automated tests in SIT and UAT environments. We are moving to Agile/DevOps, and I am working on a DevOps POC. Based on my research, DevOps is suited to CI and CD, meaning testing is automated and the pipeline is automated from dev to production. However, when we implement it, we want automatic code deployments to the different environments but want to stop the pipeline to conduct manual QA testing and manual UAT before the code is signed off for PROD deployment. If I use Jenkins, is it recommended to stop the pipeline for a few days until manual QA is completed and manual approval is given? How is manual testing accounted for in DevOps implementations? Any insights would be helpful.
CI and CD are engineering practices that enable teams to improve productivity.
They should be implemented step by step: first CI, then CD. Build out your pipelines as you mature in your DevOps process.
For example, first use a Jenkins pipeline to orchestrate CI, automating:
application build,
unit testing,
code coverage.
The output of this stage is a set of binaries that are deployed to a binary repository such as Nexus.
After CI is implemented successfully, the next step is CD: automatically deploying artifacts from one environment to another. Say the binaries need to be deployed to QA for testing. You can extend your CI pipeline to move the artifacts from DEV to the QA systems, and then stop there, because promotion to the next environment happens only once the manual testing records are approved. In other words, progressing to the next environment is manually triggered. So, while planning a CD pipeline, chalk out the essential steps that should be automated and then progress step by step.
Once your automated tests and tooling are ready, complete the CD pipeline and automate the movement of artifacts through DEV, QA, NONPROD, and so on.
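To make the manual gate concrete, here is a rough declarative Jenkins pipeline sketch; the stage names and deploy scripts are placeholders, and the input step is the standard way to pause a pipeline for human approval (with the caveat, discussed below, that holding a pipeline open for days is better handled by a separate deployment pipeline):

    pipeline {
        agent any
        stages {
            stage('Build and unit test') {
                steps {
                    // build, unit tests, code coverage; publish binaries to Nexus
                    sh './build-and-publish.sh'   // placeholder script
                }
            }
            stage('Deploy to QA') {
                steps {
                    sh './deploy.sh qa'           // placeholder script
                }
            }
            stage('Manual QA / UAT sign-off') {
                steps {
                    // Pauses the pipeline until someone approves or aborts.
                    input message: 'Manual QA and UAT complete. Promote to PROD?'
                }
            }
            stage('Deploy to PROD') {
                steps {
                    sh './deploy.sh prod'         // placeholder script
                }
            }
        }
    }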
Having a pipeline blocked for days is certainly an anti-pattern. Here is one way to mitigate it:
Separate Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
Have a separate process that routes the correct artifacts to each environment (disclaimer: I'm biased towards the one we provide, https://relizahub.com, since I work on it; here is a video showing how approvals are implemented: https://www.youtube.com/watch?v=PzdZjMby6Is).
So essentially: you run a CI pipeline, which creates a deployment artifact. Then approvals (manual and/or automated) are recorded against that specific artifact. Then a separate deployment pipeline picks the right artifact and performs the deployment. This way all pipelines run quickly and you don't have pipeline runs stuck for a long time.

What are the advantages of running specflow selenium tests in release pipeline instead of build pipeline

I was executing SpecFlow Selenium tests in an Azure build pipeline, which was working fine. But someone is insisting that I run these tests in the release pipeline instead, using the artifacts from the build pipeline.
I am not deploying any application to a server or any other machine. My release pipeline only runs the Selenium tests.
I am wondering why I should create a release pipeline if I can do this in the build pipeline itself.
Running your Selenium tests in the build pipeline has the following disadvantages:
In most cases Selenium tests are much slower than, say, simple unit tests, which increases the duration of the build pipeline. But you want a fast build pipeline so you can continue with your work.
If your tests are not stable, you will break the build, which is IMHO a no-go.
But in some cases it makes sense to execute a small set of Selenium tests during the build pipeline (if the functionality is not covered by other tests).
This makes sense if you have a big product or if the build pipeline takes very long. You don't want to wait a few hours for a successful build only to have every test in your release pipeline fail because some basic functionality does not work.
In continuous integration, the focus is on getting a good automated build out with basic build verification tests, while continuous deployment focuses heavily on testing and release management flows.
Typically you run unit tests in your build workflow and functional tests in your release workflow, after your app has been deployed (usually to a QA environment).
The official documentation also recommends running Selenium tests in the release pipeline.
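As a rough sketch of that split in YAML-based Azure Pipelines terms (project names, paths, and the test filter are placeholders), run the unit tests in the build stage and publish the build output, then run the Selenium/SpecFlow suite in a later stage that downloads the artifact:

    # azure-pipelines.yml sketch: unit tests in the build stage,
    # Selenium tests in a separate stage against the published artifact.
    stages:
      - stage: Build
        jobs:
          - job: BuildAndUnitTest
            steps:
              - script: dotnet build MyApi.sln -c Release
              - script: dotnet test MyApi.UnitTests -c Release --no-build
              - publish: MyApi.Specs/bin/Release      # placeholder path
                artifact: drop

      - stage: UITests
        dependsOn: Build
        jobs:
          - job: SeleniumTests
            steps:
              - download: current
                artifact: drop
              - script: dotnet test $(Pipeline.Workspace)/drop/MyApi.Specs.dll --filter TestCategory=selenium

The same separation applies with classic build and release pipelines: publish the test binaries as a build artifact and run the Selenium tests in a release stage that consumes it.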

Can I configure a Plan Trigger to run a dynamic branch daily?

Problem
Bamboo runs daily integration tests against the integration environment. When the Bamboo branch differs from the one deployed on the integration test environment, these tests tend to fail partially (any new tests in the newer branch fail).
What I'm trying to do to solve this:
Use a Bamboo command to get the branch currently deployed on integration
Use that value somehow to tell Bamboo to trigger that specific branch in the current run
Any idea how I can tell Bamboo to run the plan on the specified branch?
You can use a REST call to add a build to the queue:
/queue/{projectKey}-{buildKey}?stage&executeAllStages&customRevision
https://docs.atlassian.com/bamboo/REST/5.13.1/#d2e413
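For example, assuming the plan key PROJ-PLAN, a Bamboo base URL of https://bamboo.example.com, and basic authentication (the revision value is a placeholder; see the linked REST documentation for the exact parameters):

    curl -X POST -u user:password \
      "https://bamboo.example.com/rest/api/latest/queue/PROJ-PLAN?executeAllStages=true&customRevision=<revision-or-branch>"

The branch value obtained from the command mentioned in the question can be substituted into customRevision by the calling script.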

Update Hudson build result status after external tests

We use Hudson for our build/CI needs. In addition to unit tests (which run during the build), I have a staging environment that runs additional integration tests. Basically the build happens and then the build artifacts are submitted to the external system. I do not wish to block a Hudson build while it waits for the integration tests (as that keeps the build machine idle and prevents it from building anything else). What I want is to update the result of the build with the result of the external tests (and attach some logs back to the build, if possible).
Now, because the staging environment is asynchronous to the build system (i.e. other systems and people can also submit tests), Hudson can't just monitor what goes on there right after the build; the Hudson build simply goes into a test queue. So I need to notify Hudson; it can't be polling something for updates.
Does Hudson support such behaviour, and if so, how can I achieve it?
I would suggest asking on the Hudson users mailing list [1]
[1] http://java.net/projects/hudson/lists/users/archive
To solve the asynchronous wait issue, you can use remote build triggering with an authentication token and call it from a script.
The Build Triggers section of the job configuration has a "Trigger builds remotely (e.g., from scripts)" option which, when selected, allows you to enter an authentication token.
You can thus trigger the build remotely from a script, i.e., make it part of the integration test script and trigger the build job using this authentication token.
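For example, the integration test script can finish by calling the trigger URL (server, job name, and token are placeholders):

    curl "https://hudson.example.com/job/integration-results/build?token=MY_SECRET_TOKEN"

If the job needs parameters (for example the upstream build number or a link to the test logs), trigger buildWithParameters instead of build and pass them in the query string.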
You can also have a downstream project just for result collection, which integrates the results from the various tests, reports them to the master, and aggregates them all. This project can be triggered using the authentication token, or, if there is a single integration test job, you can tie it in as a downstream project.