Mercurial - Delivering Isolated Features to Test and Live

We are going to switch to Mercurial.
The piece missing from our plan is how to manage branching/merging so that isolated features can be mixed with the StableRelease and built to the TestBox (and later the LiveBox).
For instance, the predominant usage seems to be to have:
DefaultStableBranch
TestBranch
FeatureABranch
FeatureBBranch
Development on FeatureA and FeatureB will happen at the same time. It looks like the predominant usage is cloned repositories with branches for the above.
Scenario 1: We build to Test by merging LiveCode+FeatureA+FeatureB. If all goes well, we merge those changesets upstream to the DefaultStable branch and build to the LiveBox with FeatureA and FeatureB. Job done.
Scenario 2: We build to Test by merging LiveCode+FeatureA+FeatureB, and QA finds a problem with FeatureB. We no longer want to build FeatureB, but we do want to progress FeatureA. We want to re-test with FeatureA on its own, let QA pass that, and then release it to Live; hence business agility.
Questions:
If FeatureB fails QA, we need to take the FeatureB changeset nodes out of the TestBranch, build to the TestBox again, and then (hopefully) merge upstream to the DefaultStable branch and build to the LiveBox.
What is the best way of removing the FeatureB changeset nodes from the TestBranch, given that:
1. We need more dev on FeatureB, and the FeatureB changesets are not finished.
2. We need to isolate DefaultStable+FeatureABranch and build that to Test.
How are other people managing this?

There are a lot of great writeups of Mercurial workflows, including:
http://stevelosh.com/blog/2010/02/mercurial-workflows-branch-as-needed/
http://stevelosh.com/blog/2010/05/mercurial-workflows-stable-default/
https://www.mercurial-scm.org/wiki/StandardBranching
All of those use named branches very minimally (definitely not one per feature); they mostly use clones as branches, which sounds like the work mode you (and I) prefer.
On your specific question: if the combination LiveCode+FeatureA+FeatureB is failing tests, the best way to handle it is to just keep repairing FeatureB and then merge those changes down into FeatureA+FeatureB. However, before you get to that stage it's a good idea to have QA hit LiveCode+FeatureA and LiveCode+FeatureB separately too. It's slightly more work for them (more test targets), but having each feature in isolation helps find the cause of a defect more quickly.
Once LiveCode+FeatureA and LiveCode+FeatureB are passing QA, you merge them into LiveCode+FeatureA+FeatureB, and if that still passes tests, merge the whole thing into DefaultStable. There should be no need to remove FeatureB from a LiveCode+FeatureA+FeatureB clone, because you can always just create a new clone of LiveCode and merge in only FeatureA if that's what you want.
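In clone-per-feature terms, that might look roughly like this (untested sketch; the repository paths 'live', 'feature-a', 'feature-b', 'test-ab' and 'test-a' are made up, with 'live' standing in for DefaultStable/LiveCode):

    # One clone per feature, plus throwaway test clones (paths are hypothetical)
    hg clone live feature-a          # FeatureA development happens here
    hg clone live feature-b          # FeatureB development happens here

    # Throwaway test combination: LiveCode+FeatureA+FeatureB
    hg clone live test-ab
    cd test-ab
    hg pull -u ../feature-a                        # bring in FeatureA
    hg pull ../feature-b                           # bring in FeatureB as a second head
    hg merge && hg commit -m "merge FeatureB"      # a merge may also be needed after the first pull if live has moved on
    # ...build the TestBox from ./test-ab...

    # FeatureB fails QA? Don't strip anything; just build a fresh combination:
    cd .. && hg clone live test-a
    cd test-a
    hg pull -u ../feature-a                        # merge + commit here too if this creates a second head
    # ...re-test, and once QA passes push FeatureA's changesets up to live:
    hg push ../live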
Here's a great writeup of a Mercurial (Kiln) based QA/release process:
http://blog.bitquabit.com/2011/03/10/when-things-go-well/

Create FeatureA and FeatureB in feature clone branches from stable. Test is merely a temporary area for QA/Test to work from, so I would treat it as 'throwaway' from day one.
When FeatureA and FeatureB are developed enough, create a clone of either one, and pull the other into QA/Test. Do the build for QA, and when they provide feedback, make appropriate changes to FeatureB.
If FeatureA is acceptable for promotion, pull it into / push it to Stable, and merge it there.
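The promotion step might look something like this (untested sketch; the repository paths are made up):

    # Promote FeatureA into stable once QA signs off (hypothetical paths)
    cd stable
    hg pull ../feature-a
    hg merge                                   # only needed if stable has new work that FeatureA doesn't have
    hg commit -m "merge FeatureA into stable"
    # ...build the LiveBox from ./stable...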
Is that clearer than my original post?

Related

How do I manually remove old release builds from an expired/deleted plan branch in Bamboo?

I use Bamboo regularly as a QA tester to deploy pull requests and feature branches/release branches, but I'm not a developer and have a layman's understanding of how it works.
Our Bamboo configuration is set up to remove inactive branches after a certain amount of time (2 weeks), which unfortunately happens pretty regularly with longer-term projects. (When that happens, I do know how to configure a new plan and run a new build.) Often, with these larger projects, they've been deployed manually many times over the course of the project, resulting in a large list of possible "release" versions when I go to "Promote existing release to this environment."
Now I have a brand-new build of a brand-new plan for a project I've been working on, off and on, for a year, and I would like to delete all these old builds (releases?) that show up in the dropdown when I just want to deploy the current version of the current build. I can't figure out where to do that (neither can the devs I've asked, but it's NBD to them, whereas it's a constant annoyance for me).
All the advice I can find online says things like "all builds are automatically deleted when the branch expires ...." and that doesn't seem to be true, because these are definitely from old expired plan branches. They also explain how to delete things manually .... from an existing plan branch, which I don't have, because the older plan branches expired and were removed.
Am I using the wrong terminology here and these aren't "builds" and there's a separate way to delete them? Do we have a setup that's failing to delete them when it should? Do devs need to do something different with their branches? I obviously don't have access to global settings but I could put in a request if that's what needs to change.
To be clear, I'm talking about going to deployment preview, selecting "promote existing release to this environment," entering in the jira number/beginning of the branch name, and seeing a million of these (which all look identical because our branch names are hella long):
[deployment preview screenshot]
I have read through all the Bamboo documentation relating to plans, builds, branches, and deployment, and Googled various combinations of relevant keywords and haven't found a solution. I've also asked devs I work with and they don't know either.

Is there a way to make Gitlab CI run only when I commit an actual file?

New to Gitlab CI/CD.
What is the proper construct to use in my .gitlab-ci.yml file to ensure that my validation job runs only when a "real" checkin happens?
What I mean is, I observe that the moment I create a merge request, say—which of course creates a new branch—the CI/CD process runs. That is, the branch creation itself, despite the fact that no files have changed, causes the .gitlab-ci.yml file to be processed and pipelines to be kicked off.
Ideally I'd only want this sort of thing to happen when there is actually a change to a file, or a file addition, etc.—in common-sense terms, I don't want CI/CD running on silly operations that don't actually really change the state of the software under development.
I'm passably familiar with except and only, but these don't seem to be able to limit things the way I want. Am I missing a fundamental category or recipe?
I'm afraid what you ask is not possible within Gitlab CI.
There could be a way to use the CI_COMMIT_SHA predefined variable since that will be the same in your new branch compared to your source branch.
Still, the pipeline will run before it can determine or compare SHAs in a custom script or condition.
Gitlab runs pipelines for branches or tags, not individual commits. Pushing to a repo triggers a pipeline, and creating a branch is, in effect, pushing a change to the repo.
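If the aim is just to stop the validation job itself from doing work on a branch-creation push, one possible (untested) sketch keys off the predefined CI_COMMIT_BEFORE_SHA variable, which is all zeros for the first push of a new branch. The job name and script path below are made up, and note the pipeline itself will still be created:

    validate:
      script:
        - ./run-validation.sh                 # hypothetical validation script
      rules:
        # A push that merely creates a branch has CI_COMMIT_BEFORE_SHA set to all zeros;
        # skip the job in that case.
        - if: '$CI_COMMIT_BEFORE_SHA == "0000000000000000000000000000000000000000"'
          when: never
        - when: on_success

This uses the newer rules: syntax rather than only/except; expressing the same condition with only/except is harder.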

Separating building and testing jobs in Jenkins

I have a build job which takes a parameter (say, which branch to build). When it completes, it triggers a testing job (actually several jobs) which does things like download a bunch of test data and check that the new version works with that test data.
My problem is that I can't figure out a way to show the test results sensibly. If I use a single testing job, the results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want. If I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion: say 6 branches and 6 different types of testing means 36 testing jobs, and when I want to make a change (say, to save more builds) I have to update all 36 by hand.
I've been looking at the Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs created/updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is the separation of building and testing jobs like this simply not recommended, or is there some other method I haven't found yet for filtering a job's test results based on build parameters?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
In particular, I wouldn't try to use the collection of Jenkins job results as the overview; I'd create a web page or something like that instead.
I also wouldn't expect to need the full build/test matrix; the combinations that are actually used will become clear from the use cases.
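A different way to keep results per branch without a 36-job matrix (not what the answer above describes, just a rough sketch with made-up script names and report paths) is a single parameterized pipeline that runs build and tests together, so each branch's test results stay attached to that branch's runs:

    // Hypothetical Jenkinsfile: one parameterized pipeline instead of 6 x 6 separate jobs
    pipeline {
        agent any
        parameters {
            string(name: 'BRANCH', defaultValue: 'stable', description: 'Branch to build and test')
        }
        stages {
            stage('Build') {
                steps { sh "./build.sh ${params.BRANCH}" }       // hypothetical build script
            }
            stage('IntegrationTest') {
                steps { sh './run-integration-tests.sh' }        // hypothetical test runner
            }
            stage('LoadTest') {
                steps { sh './run-load-tests.sh' }
            }
        }
        post {
            always { junit 'reports/**/*.xml' }                  // results recorded against this branch's run
        }
    }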

TeamCity: Managing deployment dependencies for acceptance tests?

I'm trying to configure a set of build configurations in TeamCity 6, and I'm trying to model a specific requirement in the cleanest manner TeamCity allows.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite, which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can use "Limit the number of simultaneously running builds" to force only one DeploySite to happen at a time, I need it to run one at a time and not while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution would be for DeploySite to set an 'in use' marker flag and have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes having the next DeploySite down the pipeline wait until that gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this 'set a flag over here and have this bit check it' business is exactly the sort of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set that flag when a deploy starts and clear it after the tests have run.
It seems you go look on the Jetbrains Devnet and the YouTrack tracker first, remembering to use the magic word 'clobber' in your search.
Then you install groovy-plug and use the StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use those locks to capture the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
(This is not a fully productised solution, hence the tracker item remains open.)
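As a rough sketch only (the exact property placement and name format are still open questions, as the edit below notes, and the lock name SharedTestSite is made up), the intent would be something like:

    # DeploySite build configuration - the exclusive 'writer' on the shared site
    system.locks.writeLock.SharedTestSite = x

    # AcceptanceSuiteA and AcceptanceSuiteB - concurrent 'readers' of the same lock name
    system.locks.readLock.SharedTestSite = x

(The property value appears not to matter; the lock name lives in the property name.)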
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, or what the exact name format should be - is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers: how does one give read access to builds triggered by the completion of a writeLock task - does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?

How do I prevent a branch from being pushed to another branch in BZR?

We use a dev-test-prod branching scheme with bzr 2. I'd like to setup a bzr hook on the prod branch that will reject a push from the test branch. Looking at the bzr docs, this looks doable, but I'm kinda surprised that my searches don't turn up any one having done it, at least not via any of the keywords I've thought to search by. I'm hoping someone has already gotten this working and can share their path to success.
My current thought is to use the pre_change_branch_tip hook to check for the presence of a file on the test branch. If it's present, fail the commit.
You may ask, why test for a file, why not just test the branch name? Because I actually need to handle the case where our developers have branched their devel branch, pulled in the shared test branch and are now (erroneously) pushing that test branch to production instead of pushing their feature branch to production. And it seems a billion times easier to look for a file in the new branch than to try to interrogate the sending branch's lineage.
So has someone done this? Seen it done? Or do I get to venture out into the uncharted wasteland that is hook development with bzr? :)
Your approach should work, and the plugin will be quite simple: just raise an exception if the file is present.
(For some sample code, you can look at a plugin I wrote that can prevent commits under certain conditions: https://launchpad.net/bzr-text-checker)
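A minimal sketch of that plugin, assuming a made-up marker filename and hook description (bzrlib API details are from memory, so double-check against the docs):

    # __init__.py of a hypothetical plugin, e.g. ~/.bazaar/plugins/block_test_branch/
    from bzrlib import branch, errors

    MARKER = 'THIS_IS_THE_TEST_BRANCH'   # hypothetical marker file that only exists on the test branch

    def reject_test_branch(params):
        # params is a ChangeBranchTipParams; inspect the tree of the proposed new tip.
        tree = params.branch.repository.revision_tree(params.new_revid)
        if tree.has_filename(MARKER):
            # Raising TipChangeRejected aborts the push/commit with this message.
            raise errors.TipChangeRejected(
                "Refusing to change this branch's tip to the shared test branch.")

    branch.Branch.hooks.install_named_hook(
        'pre_change_branch_tip', reject_test_branch, 'block test branch')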