How do I manually remove old release builds from an expired/deleted plan branch in Bamboo?

I use Bamboo regularly as a QA tester to deploy pull requests and feature branches/release branches, but I'm not a developer and have a layman's understanding of how it works.
Our Bamboo configuration is set up to remove inactive branches after a certain amount of time (two weeks), which unfortunately happens pretty regularly with longer-term projects. (When that happens, I do know how to configure a new plan and run a new build.) Often, with these larger projects, the branches have been deployed manually many times over the course of the project, resulting in a large list of possible "release" versions when I go to "Promote existing release to this environment."
Now I have a brand-new build of a brand-new plan for a project I've been working on, off and on, for a year, and I would like to delete all these old builds (releases?) that show up in the dropdown when I just want to deploy the current version of the current new build. But I can't figure out where to do it (neither can the devs I've asked, but it's NBD to them, whereas it's a constant annoyance for me).
All the advice I can find online says things like "all builds are automatically deleted when the branch expires," and that doesn't seem to be true, because these are definitely from old expired plan branches. The same sources also explain how to delete things manually from an existing plan branch, which I don't have, because the older plan branches expired and were removed.
Am I using the wrong terminology here and these aren't "builds" and there's a separate way to delete them? Do we have a setup that's failing to delete them when it should? Do devs need to do something different with their branches? I obviously don't have access to global settings but I could put in a request if that's what needs to change.
To be clear, I'm talking about going to the deployment preview, selecting "Promote existing release to this environment," entering the Jira number/beginning of the branch name, and seeing a million of these (which all look identical because our branch names are hella long):
[screenshot: the "Promote existing release to this environment" dropdown in the deployment preview]
I have read through all the Bamboo documentation relating to plans, builds, branches, and deployment, and Googled various combinations of relevant keywords and haven't found a solution. I've also asked devs I work with and they don't know either.
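For what it's worth, the closest thing to a lead I've found is Bamboo's REST API for deployment projects, which an admin could use to list and prune release versions in bulk. This is only a sketch: the endpoint paths are my assumption and would need to be verified against our Bamboo version's REST documentation, and the host, credentials, and ids are placeholders.

    # Assumed endpoints -- verify against your Bamboo version's REST API docs.
    # List deployment projects to find the relevant project id:
    curl -u admin "https://bamboo.example.com/rest/api/latest/deploy/project/all"
    # List that project's release versions (the entries cluttering the dropdown):
    curl -u admin "https://bamboo.example.com/rest/api/latest/deploy/project/123456/versions"
    # Delete one stale version by id (assumed endpoint):
    curl -u admin -X DELETE "https://bamboo.example.com/rest/api/latest/deploy/version/654321"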

Related

Managing checkouts of same binary file in different branches in Perforce

How do we prevent checking out / changing a single binary file in different branches of the same content? A typical situation: designers have edited a game level (a *.umap binary file) in their branch, and programmers changed the same file in their branch (for example, added a blueprint to that level). So now we have three different versions of this file: one in the master branch before all changes, one in the designers' branch without the programmers' changes, and one in the programmers' branch without the designers' changes. Now we must merge the designers' changes and the programmers' changes into the master branch, but we can't.
So the question is: how do we handle these situations correctly? Can we set up Perforce to check out a binary file in multiple branches at the same time, or something like that? Thanks...
There are a couple of different ways to think about this.
If you don't want work to continue or begin in one branch until changes from another branch have been merged into it, you can use Helix (Perforce) Protections to give users read-only access to the branch.
This means they will be able to open files for edit, but won't be able to submit their changes.
More info about protections is here:
https://www.perforce.com/perforce/doc.current/manuals/p4sag/chapter.security.html
The protections would need to be changed when you are ready for work on the other branches to start.
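For illustration, a minimal sketch of what the protections table (edited with p4 protect) might contain; the group name and depot paths here are placeholders:

    # Later lines override earlier ones. The "open" level lets users open
    # files for edit but blocks submits (the behaviour described above);
    # the stricter "read" level would block opening for edit as well.
    write group dev * //depot/...
    open  group dev * //depot/frozen-branch/...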
If you want a file to be automatically checked out on all branches each time someone checks it out on any branch where it exists, you would currently have to script this.
You could do it using the broker and a workspace for every branch, each with a view that includes just the files you want to be checked out everywhere.
The files would then need to be checked out in these workspaces and locked, so that other users can't submit to these branches until the locks are removed.
This is not trivial and may have a performance impact.
You might also be able to do it using pre-command triggers, if your server version is new enough.
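Going back to the lock-based approach, here is a very rough sketch of the manual steps such a script would automate; the workspace and file names are placeholders:

    # From a dedicated per-branch workspace, open and lock the shared binary
    # so that no one else can submit it on that branch:
    p4 -c lockws-branchA edit //depot/branchA/Levels/Level1.umap
    p4 -c lockws-branchA lock //depot/branchA/Levels/Level1.umap
    # When the file is released again, revert to drop both the open and the lock:
    p4 -c lockws-branchA revert //depot/branchA/Levels/Level1.umap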
If you want to go in to more detail about any of the above, I recommend you contact Perforce Technical Support.
Hope this helps,
Jen.

Checking in pending changes in TFS does not affect source code

I'm an extreme newbie to managing TFS, so please bear with me and know I'll need baby steps. I'll try to be as specific as possible.
I recently inherited an MVC ASP.net website written by a former colleague. Generally he would work directly in the production environment and commit changes as he went along. Obviously that's not good practice, so when I received it I decided to set it up in TFS along with a proper testing and development environment. I created the team project collection, added the existing solution to the collection, set up branching and branch hierarchy, and mapped the work environments. From what I can tell it's set up just like our other site that was configured in TFS before I came on (the person who set it up is long gone).
The issue I'm seeing now is that checking in changes doesn't seem to affect the actual code behind the site. Whether I make the changes in the test branch and then check in/merge the changeset to the production branch, or make the changes directly in production, saving and checking in changes doesn't actually affect the site. If I go into Solution Explorer and look at the files I just edited, my checked-in changes are not there. Same if I edit a web.config or something: I can then open it up in another text editor and my changes are nowhere to be found.
I followed Microsoft's instructions as closely as I could but clearly I missed something, I just have no idea what.
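Based on my reading so far, the next thing I plan to check (a sketch using the tf.exe client that ships with Team Explorer; the server path is a placeholder for mine) is whether the folder the site is actually served from is mapped into the workspace I'm checking into. As far as I understand it, a check-in only updates version control on the server; files on disk only change when a Get is performed into a mapped folder.

    rem Show the workspace mappings for the current folder:
    tf workfold
    rem Pull the latest checked-in code for the production branch into its mapped folder:
    tf get $/MySite/Prod /recursive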

TFS Builds, Project Files: Orphaned references to files not being pushed are causing endless build errors

We are using TFS 2010 (Visual Studio) for our deployments and have client code projects (.csproj files) and database projects (.dbproj files). We understand that when our developers add files to our application, there is a corresponding reference to these files in the project file. If I push a changeset from Dev to QA that includes the project file, and the project file contains a reference to a newly added file that is not in the changeset, I will receive a build error.
Once we started pushing just changesets (as opposed to performing full builds) this quickly became our number one bottleneck in doing TFS builds. I would deploy the database project and there would be 20 errors. The only way I could correct them was to navigate down the entire solution explorer tree and exclude each orphaned reference individually. This has proved far too time consuming and on the advice of our lead programmer we have returned to doing full builds of QA and UAT.
We are in the early stages of this product, and therefore we will be adding many files for some time. We need a better solution for this problem. Neither the manual exclusions nor asking developers not to check in code until it is ready for QA will suffice for us. Has anybody out there had any experience with this problem, and if so, how did you deal with it? Thanks!
Jon
Pushing changesets to QA selectively is known as cherry-picking, and it causes the sorts of issues that you are experiencing. It is not the recommended practice; instead, set up the QA build so that a successful build is part of check-in (a gated check-in). That way, if part of a fix is missed (as it may be when a fix spans multiple changesets), the build will fail and the check-in cannot be performed.
Second, have the developers do the second check-in to QA, or merge the dev changesets to QA, and have the team lead coordinate changes to project files by turning on "notify changes made by others" or setting a policy for the dev team. Full builds should always be done, as partial builds do not always pick up the complete dependency graph.

Mercurial - Delivering Isolated Features to Test and Live

We are going to swap to Mercurial.
A missing piece in our plan is how to manage the branching/merging of builds to TestBox (and LiveBox) so isolated features can be mixed with the StableRelease and built to TestBox.
For instance, it seems the predominant usage is to have
DefaultStableBranch
TestBranch
FeatureABranch
FeatureBBranch
Development on FeatureA and FeatureB will happen at the same time. It looks like the predominant usage is to have cloned repositories with branches for the above.
Scenario 1: If we build to test, we merge LiveCode+FeatureA+FeatureB. If all goes well, we can merge the changesets upstream to the DefaultStable branch and build to LiveBox with FeatureA and FeatureB. Job done.
Scenario 2: If we build to test, we merge LiveCode+FeatureA+FeatureB, and QA shows there is a problem with FeatureB. We do not want to build FeatureB anymore, but we do want to progress FeatureA. We want to re-test with FeatureA on its own, let QA pass that, and then release it to Live, and hence gain business agility.
Questions:
If FeatureB fails QA, we need to take the FeatureB changeset nodes out of the TestBranch, build to TestBox again, and then hopefully merge upstream to the DefaultStable branch and on to LiveBox.
What is the best way of removing the FeatureB changeset nodes from the TestBranch, given that: 1. we need more dev work on FeatureB, and the FeatureB nodeset is not finished;
2. we need to isolate DefaultStable+FeatureABranch and build that to test?
How are other people managing this ?
There are a lot of great writeups of Mercurial workflows, including:
http://stevelosh.com/blog/2010/02/mercurial-workflows-branch-as-needed/
http://stevelosh.com/blog/2010/05/mercurial-workflows-stable-default/
https://www.mercurial-scm.org/wiki/StandardBranching
All of those use Named Branches very minimally, and definitely not one per feature; clones-as-branches sounds like the work mode you (and I) prefer.
Hitting your specific question: if the combination LiveCode+FeatureA+FeatureB is failing tests, the best way to handle it is to just keep repairing FeatureB and then merging those changes down into FeatureA+FeatureB. However, before you get to that stage, it's a good idea to have QA hit LiveCode+FeatureA and LiveCode+FeatureB separately too. It's slightly more work for them (more test targets), but having each feature in isolation helps find the cause of a defect more quickly.
Once LiveCode+FeatureA and LiveCode+FeatureB are passing QA, you merge them into LiveCode+FeatureA+FeatureB, and if that still passes tests, merge the whole thing into DefaultStable. There should be no need to remove FeatureB from LiveCode+FeatureA+FeatureB, because you can always just create a new clone of LiveCode and merge in only FeatureA if you want it.
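As a rough sketch of that last point (repository URLs are placeholders): rather than stripping FeatureB out of the test repo, rebuild the test target from a fresh clone of stable plus only FeatureA:

    hg clone https://hg.example.com/LiveCode Test-FeatureA
    cd Test-FeatureA
    # Pull only the FeatureA clone's changesets and merge in the new head:
    hg pull https://hg.example.com/FeatureA
    hg merge
    hg commit -m "Test build: LiveCode + FeatureA only"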
Here's a great writeup of a Mercurial (Kiln) based QA/release process:
http://blog.bitquabit.com/2011/03/10/when-things-go-well/
Create FeatureA and FeatureB in feature clone branches from stable. Test is merely a temporary area for QA/Test to work from, so I would treat it as 'throwaway' from day one.
When FeatureA and FeatureB are developed enough, create a clone of either one, and pull the other into QA/Test. Do the build for QA, and when they provide feedback, make appropriate changes to FeatureB.
If FeatureA is acceptable for promotion, pull it into (or push it to) Stable, and merge it there.
Is that clearer than my original post?

TeamCity: Managing deployment dependencies for acceptance tests?

I'm trying to configure a set of build configurations in TeamCity 6 and am trying to model a specific requirement in the cleanest possible manner enabled by TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests, and they'll all get pulled in. So let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good. (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC; the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole.)
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes, and I can't afford to spin up a completely isolated site just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can set "Limit the number of simultaneously running builds" to force only one DeploySite to run at a time, I need at most one at a time, and none while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again. (Doing a blocking wait within the build doesn't feel right; I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something.) However, this sort of "stuff a flag over here and have this bit check it" approach is exactly the mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word 'clobber' in your search.
Then you install groovy-plug and use the StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use that to manage the fact that the dependent configs 'read', and the DeploySite config 'writes', the shared item (see the sketch below).
(This is not a fully productised solution, hence the tracker item remains open.)
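Concretely, following the convention quoted above, the properties might end up looking something like this; the lock name DeployedSite is a placeholder, and my reading (not verified) is that only the property name matters, not its value:

    # On the DeploySite build configuration (the writer):
    system.locks.writeLock.DeployedSite=true
    # On AcceptanceSuiteA and AcceptanceSuiteB (the readers):
    system.locks.readLock.DeployedSite=true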
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, or what the exact name format should be: is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers: how does one give read access to builds that are triggered by the completion of a writeLock build? Does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependencies at the same time?