I was running a project with test automation in CI/CD when a question occurred to me.
Is test automation in CI/CD really trustworthy?
Here is why I am asking.
For example, suppose I edit some code on branch A and other code on branch B, and open a PR for each.
Since test automation is in place, the code on branch A and on branch B is tested automatically; let's say both test runs pass.
But what if the two sets of changes have no merge conflicts, yet interact with each other and introduce a bug?
In that case, how can I trust the test automation and click the merge button?
A good question, or at least a situation I have seen before: branch A works fine, branch B works fine, but the combination of A and B does not.
I am going to assume that the automated tests do catch the issue once both branches are merged. The flow of builds essentially looks like this:
Tip of master passes.
Branch A passes.
Branch B passes.
Branch A is merged and the tip of master passes.
Branch B is merged and the tip of master fails.
In this case the CI/CD automation is trustworthy: it accurately reports that the problem was caused by merging B. You can quickly return to a working state as follows (a rough command sketch is shown after the list):
Reverting the changes from B (at which point master is now passing).
Then you can get branch B ready to merge again.
Rebasing branch B on the current master.
Fixing the issue.
Testing / Reviewing and Merging branch B.
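If you want to automate that recovery, a minimal sketch might look like the following. It assumes a git-backed repository (the question does not actually name the VCS), and the merge-commit SHA and branch name are placeholders.

```
# Hypothetical sketch of the "revert B, fix, re-merge" recovery flow.
# Assumes a git repository; the merge-commit SHA and branch name are placeholders.
import subprocess

def git(*args):
    """Run a git command, echo it, and stop on the first failure."""
    print("$ git", " ".join(args))
    subprocess.run(["git", *args], check=True)

def recover_from_failed_merge(merge_sha: str, feature_branch: str = "branch-B"):
    # 1. Revert the merge of branch B so the tip of master passes again.
    git("checkout", "master")
    git("revert", "-m", "1", merge_sha)  # -m 1 keeps master's side as the mainline
    git("push", "origin", "master")

    # 2. Rebase branch B on the (now passing) master, then fix, test, and re-merge.
    git("checkout", feature_branch)
    git("rebase", "master")
    # ...fix the interaction bug here, re-run the tests, and open a new PR.

if __name__ == "__main__":
    recover_from_failed_merge("abc1234")  # placeholder SHA
```

The same steps can of course be done by hand; the point is simply that master is made green first and branch B is repaired afterwards.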
I am sure you are thinking that if you own branch B, this is a lot of work for something you hoped would be caught before the merge. However, this environment is typically trying above all else to protect the state of the master branch, i.e. to keep master in a working state. The inconvenience to one developer is outweighed by the rest of the team being able to rely on a working master branch.
As for whether you should trust the automation and click the merge button: I personally avoid clicking it late in the day. I trust that the automation will report issues, but I don't trust that it is perfect. I prefer to merge in the morning, when I have time to react if the build after the merge fails.
Related
I use Bamboo regularly as a QA tester to deploy pull requests and feature branches/release branches, but I'm not a developer and have a layman's understanding of how it works.
Our Bamboo configuration is set up to remove inactive branches after a certain amount of time (2 weeks), which happens pretty regularly with longer-term projects, unfortunately. (When that happens, I do know how to configure a new plan and run a new build.) Often, with these larger projects, they've been deployed manually many times over the course of the project, resulting in a large list of possible "release" versions when I go to "Promote existing release to this environment."
Now I have a brand-new build of a brand-new plan for a project I've been working on, off and on, for a year. I would like to delete all the old builds (releases?) that show up in the dropdown when I just want to deploy the current version of the current new build, but I can't figure out where to do it (neither can the devs I've asked, but it's no big deal to them, whereas it's a constant annoyance for me).
All the advice I can find online says things like "all builds are automatically deleted when the branch expires...", and that doesn't seem to be true, because these are definitely from old expired plan branches. The docs also explain how to delete things manually... from an existing plan branch, which I don't have, because the older plan branches expired and were removed.
Am I using the wrong terminology here and these aren't "builds" and there's a separate way to delete them? Do we have a setup that's failing to delete them when it should? Do devs need to do something different with their branches? I obviously don't have access to global settings but I could put in a request if that's what needs to change.
To be clear, I'm talking about going to deployment preview, selecting "promote existing release to this environment," entering in the jira number/beginning of the branch name, and seeing a million of these (which all look identical because our branch names are hella long):
[screenshot: deployment preview]
I have read through all the Bamboo documentation relating to plans, builds, branches, and deployment, and Googled various combinations of relevant keywords and haven't found a solution. I've also asked devs I work with and they don't know either.
I am trying to formalize our development workflow, and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new to setting up processes, so it would be great to have feedback. P.S. We are working on an AWS serverless application.
Create an issue link type in Jira: 'is tested by'. The link has no function other than correctly displaying the relation when viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration/e2e test cases will be written in the same branch the developer is working in; the e2e test cases, however, will live in a separate branch since they are in a separate repository (open for discussion).
The Testcase issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could add a capability in the CI system to automatically move the Testcase to Success when the test passes in CI (this should be possible using the Jira API; a rough sketch follows this list). This is completely optional, and we will most probably do it manually.
When all the test cases related to a user story have moved to Success, the user story can be moved to Done.
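On the optional CI-to-Jira automation mentioned above: a minimal sketch of how a CI step could move a Testcase to Success via Jira's REST transitions endpoint. The base URL, credentials, issue key, and transition name are placeholders for whatever your instance actually uses.

```
# Hypothetical CI step: move a Testcase issue to Success via the Jira REST API.
# Base URL, credentials, issue key, and transition name are placeholders.
import requests

JIRA_BASE = "https://your-company.atlassian.net"
AUTH = ("ci-bot@example.com", "api-token")      # Jira Cloud uses email + API token

def transition_issue(issue_key: str, transition_name: str) -> None:
    # Look up the id of the named transition available from the issue's current status.
    url = f"{JIRA_BASE}/rest/api/2/issue/{issue_key}/transitions"
    transitions = requests.get(url, auth=AUTH).json()["transitions"]
    transition_id = next(t["id"] for t in transitions if t["name"] == transition_name)

    # Perform the transition (e.g. Under Testing -> Success).
    resp = requests.post(url, auth=AUTH, json={"transition": {"id": transition_id}})
    resp.raise_for_status()

if __name__ == "__main__":
    transition_issue("TEST-123", "Success")     # called after the CI test run passes
```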
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1, adding the test cases. Working in the same branch keeps the QA and the developer in sync, and should ensure the developer is not blocked waiting for the test cases to be completed before the branch can be merged into develop.
The feature branch will be reviewed as soon as the developer creates the pull request, so that the review is not left pending until the test cases have been developed and passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states; you may have other outcomes such as Cancelled or Duplicate. You can use the Resolution field for the outcomes, or create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issues jumping back and forth in the workflow; if Failure is a status, you set yourself up for a lot of that.
You may also want a status after New, such as Test Creation, for the writing of the test case, and a status after that, such as Ready for Testing. This would let you see more specifically where the work is in the flow, and also capture how much time is spent writing tests, how long test cases wait, and how much time goes into actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed (a rough sketch of that check follows this list).
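On that last suggestion: the natural place for this is a workflow condition or validator inside Jira itself, but purely to illustrate the check, here is a hedged sketch that queries the Jira REST API and verifies that every issue linked with 'is tested by' is Closed. The base URL, credentials, and link/status names are assumptions.

```
# Illustrative check (not a Jira-native validator): refuse to close a story
# unless every issue linked with "is tested by" is itself Closed.
# Base URL, credentials, and link/status names are placeholders.
import requests

JIRA_BASE = "https://your-company.atlassian.net"
AUTH = ("bot@example.com", "api-token")

def linked_testcases_all_closed(story_key: str) -> bool:
    url = f"{JIRA_BASE}/rest/api/2/issue/{story_key}?fields=issuelinks"
    links = requests.get(url, auth=AUTH).json()["fields"]["issuelinks"]
    for link in links:
        # Match the link whether "is tested by" is the type name or a direction label.
        t = link["type"]
        if "is tested by" not in (t.get("name"), t.get("inward"), t.get("outward")):
            continue
        # The linked test case sits on the inward or outward side depending on direction.
        linked = link.get("inwardIssue") or link.get("outwardIssue")
        if linked["fields"]["status"]["name"] != "Closed":
            return False
    return True

if __name__ == "__main__":
    if linked_testcases_all_closed("STORY-42"):     # placeholder issue key
        print("All linked test cases are closed; the story may be closed.")
    else:
        print("Open test cases remain; keep the story open.")
```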
AIO Tests for Jira, unlike some other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, giving a glimpse of the story's testing progress. It allows everyone, from product to developer, to see the testing status at a glance.
You can also create testing tasks and get a glimpse of the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.
We are facing challenges with ADF (Azure Data Factory) deployment when multiple teams work on the same ADF instance for different use cases. When we move from Dev to Test to Prod, we find it difficult to deploy the code.
Teams work on their own working (feature) branches.
These are merged into the collaboration branch and published.
When we move to SIT, we take the code to another branch, the 'integration' branch.
If team A pushes their code to SIT, it is deployed from the integration branch.
A few days later team B also moves to SIT and needs to merge their code into the integration branch.
But when team A later moves to Prod, they also pick up team B's code.
How can we resolve this? We really do not want another team's code that is still in the testing phase to move to Prod.
Are we missing something in our branching strategy?
Keep your QA releases on, say, an integration branch.
Keep your prod releases on, say, the master branch.
Keep two hotfix branches, one for each team.
Prod bug fixes then go from the dev branch to prod through the respective team's hotfix branch (a rough sketch of this flow follows).
The same bug fixes are also merged into the integration branch.
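A rough sketch of that hotfix flow, assuming the ADF repository is git-backed and using placeholder branch names (master for prod, integration for QA/SIT, hotfix/team-a and dev/team-a-fix for team A):

```
# Hypothetical git flow for a prod bug fix, per the branching suggestion above.
# Branch names (master, integration, hotfix/team-a, dev/team-a-fix) are placeholders.
import subprocess

def git(*args):
    print("$ git", " ".join(args))
    subprocess.run(["git", *args], check=True)

# Team A cuts its hotfix branch from prod and brings over the fix from its dev branch.
git("checkout", "-b", "hotfix/team-a", "master")
git("merge", "--no-ff", "dev/team-a-fix")      # or cherry-pick just the fix commits

# Release the fix to prod without picking up anything from the other team.
git("checkout", "master")
git("merge", "--no-ff", "hotfix/team-a")

# Carry the same fix back to the QA branch so integration does not drift.
git("checkout", "integration")
git("merge", "--no-ff", "hotfix/team-a")
```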
We are going to switch to Mercurial.
A missing piece in our plan is how to manage branching/merging for builds to the TestBox (and LiveBox), so that isolated features can be mixed with the StableRelease and built to the TestBox.
For instance, it seems the predominant usage is to have
DefaultStableBranch
TestBranch
FeatureABranch
FeatureBBranch
Development on FeatureA and FeatureB will happen at the same time. It looks like the predominant usage is to have cloned repositories with branches for the above.
Scenario 1: If we build to test, we merge LiveCode+FeatureA+FeatureB. If all goes well, we can merge the changesets upstream to the DefaultStable branch and build to the LiveBox with FeatureA and FeatureB. Job done.
Scenario 2: If we build to test, we merge LiveCode+FeatureA+FeatureB, and QA shows there is a problem with FeatureB. We no longer want to build FeatureB, but we do want to progress FeatureA. We want to re-test with FeatureA on its own, let QA pass it, and then release it to Live; hence business agility.
Questions:
If FeatureB fails QA, we need to take the FeatureB changesets out of the TestBranch, build to the TestBox again, and then hopefully merge upstream to the DefaultStable branch and on to the LiveBox.
What is the best way of removing the FeatureB changesets from the TestBranch, given that (1) we need more development on FeatureB and its changesets are not finished, and (2) we need to isolate DefaultStable+FeatureABranch and build that to test?
How are other people managing this?
There are a lot of great writeups of Mercurial workflows, including:
http://stevelosh.com/blog/2010/02/mercurial-workflows-branch-as-needed/
http://stevelosh.com/blog/2010/05/mercurial-workflows-stable-default/
https://www.mercurial-scm.org/wiki/StandardBranching
All of those use named branches very minimally -- definitely not one per feature -- and rely on clones as branches instead, which sounds like the work mode you (and I) prefer.
To hit your specific question: if the combination LiveCode+FeatureA+FeatureB is failing tests, the best way to handle it is to keep repairing FeatureB and then merge those changes down into the combined FeatureA+FeatureB area. However, before you get to that stage it's a good idea to have QA test LiveCode+FeatureA and LiveCode+FeatureB separately too. It's slightly more work for them (more test targets), but having each feature in isolation helps find the cause of a defect more quickly.
Once LiveCode+FeatureA and LiveCode+FeatureB are passing QA, you merge them into LiveCode+FeatureA+FeatureB, and if that still passes tests, merge the whole thing into DefaultStable. There should be no need to remove FeatureB from LiveCode+FeatureA+FeatureB, because you can always just create a new clone of LiveCode and merge in only FeatureA if that's what you want (see the sketch below).
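To make that last point concrete, here is a rough sketch of rebuilding a throwaway test area that contains only LiveCode plus FeatureA; the clone names and paths are made up:

```
# Hypothetical sketch: rebuild a throwaway QA area containing only
# LiveCode + FeatureA. Directory names (live, feature-a, test-featureA)
# are placeholders for your clones.
import subprocess

def hg(*args, cwd=None):
    print("$ hg", " ".join(args))
    subprocess.run(["hg", *args], check=True, cwd=cwd)

hg("clone", "live", "test-featureA")              # start from LiveCode only
hg("pull", "../feature-a", cwd="test-featureA")   # bring in FeatureA's changesets
hg("merge", cwd="test-featureA")                  # merge FeatureA with LiveCode
hg("commit", "-m", "Merge FeatureA for QA", cwd="test-featureA")
# Build this clone to the TestBox; FeatureB is simply never pulled in.
```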
Here's a great writeup of a Mercurial (Kiln) based QA/release process:
http://blog.bitquabit.com/2011/03/10/when-things-go-well/
Create FeatureA and FeatureB in feature clone branches from stable. Test is merely a temporary area for QA/Test to work from, so I would treat it as 'throwaway' from day one.
When FeatureA and FeatureB are developed enough, create a clone of either one, and pull the other into QA/Test. Do the build for QA, and when they provide feedback, make appropriate changes to FeatureB.
If FeatureA is acceptable for promotion, pull it into (or push it to) Stable and merge it there.
Is that clearer than my original post?
We use a dev-test-prod branching scheme with bzr 2. I'd like to set up a bzr hook on the prod branch that will reject a push from the test branch. Looking at the bzr docs, this looks doable, but I'm kinda surprised that my searches don't turn up anyone having done it, at least not via any of the keywords I've thought to search by. I'm hoping someone has already gotten this working and can share their path to success.
My current thought is to use the pre_change_branch_tip hook to check for the presence of a file on the test branch. If it's present, fail the commit.
You may ask, why test for a file, why not just test the branch name? Because I actually need to handle the case where our developers have branched their devel branch, pulled in the shared test branch and are now (erroneously) pushing that test branch to production instead of pushing their feature branch to production. And it seems a billion times easier to look for a file in the new branch than to try to interrogate the sending branch's lineage.
So has someone done this? Seen it done? Or do I get to venture out into the uncharted wasteland that is hook development with bzr? :)
Your approach should work, and the plugin will be quite simple: just raise an exception if the file is present.
(For some sample code, you can look at a plugin I wrote that can prevent commits under certain conditions: https://launchpad.net/bzr-text-checker)
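For reference, a rough sketch of such a plugin; the marker-file name is an assumption, and the exact tree/locking API may vary slightly between bzr 2.x releases:

```
# Hypothetical bzr plugin (e.g. drop into ~/.bazaar/plugins/reject_test_branch/__init__.py
# on the machine that hosts the prod branch). Rejects any branch-tip change whose new
# tip still contains a marker file that only exists on the test branch.
# The marker file name is an assumption.
from bzrlib import branch, errors

MARKER = 'do_not_push_to_prod'

def reject_if_marker_present(params):
    """pre_change_branch_tip hook: refuse the update if the marker file is in the new tip."""
    tree = params.branch.repository.revision_tree(params.new_revid)
    tree.lock_read()
    try:
        if tree.has_filename(MARKER):
            raise errors.TipChangeRejected(
                "Refusing update: %s is present, this looks like the test branch."
                % MARKER)
    finally:
        tree.unlock()

branch.Branch.hooks.install_named_hook(
    'pre_change_branch_tip', reject_if_marker_present, 'reject test branch pushes')
```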