TFS 2010 – Each new branch inherits custom group permissions from the main trunk

When branching from the main trunk in TFS 2010, all the custom groups and permissions granted on the main trunk are inherited by the newly created branch. This results in a fair number of permissions that need to be cleaned up in the new branch after creation.
(Each branch is a complete copy of the trunk.)
I am using a script (a batch of tf commands) to set all the new permissions on each new branch. As a last resort, I am considering expanding this permissions script to also clean up the unwanted trunk permissions.
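For illustration, here is a minimal sketch of the kind of commands such a script contains; the collection URL, group names, and branch path below are made-up placeholders (check tf permission /? on your server for the exact permission names it accepts):

    # Grant a branch-specific group its rights on the new branch
    tf permission /allow:Read,PendChange,Checkin /group:"[MyProject]\QA Team" /recursive /collection:http://tfsserver:8080/tfs/DefaultCollection "$/MyProject/Branches/Release-1.1"

    # Strip a group that was inherited from the trunk
    tf permission /remove:* /group:"[MyProject]\Trunk Contributors" /recursive /collection:http://tfsserver:8080/tfs/DefaultCollection "$/MyProject/Branches/Release-1.1"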
I am however hoping to treat the cause and not the symptom.
Is there any way to override this behavior?

I found the solution here: http://tfsbranchpermremoval.codeplex.com.
It involves writing a plugin for TFS and deploying it on the TFS application server. After deployment, no permissions from the main trunk are inherited by new branches on branching actions.

Related

How do I manually remove old release builds from an expired/deleted plan branch in Bamboo?

I use Bamboo regularly as a QA tester to deploy pull requests and feature/release branches, but I'm not a developer and have a layman's understanding of how it works.
Our Bamboo configuration is set up to remove inactive branches after a certain amount of time (two weeks), which happens pretty regularly with longer-term projects, unfortunately. (When that happens, I do know how to configure a new plan and run a new build.) Often, with these larger projects, they've been deployed manually many times over the course of the project, resulting in a large list of possible "release" versions when I go to "Promote existing release to this environment."
Now I have a brand-new build of a brand-new plan for a project I've been working on, off and on, for a year, and I would like to delete all the old builds (releases?) that show up in the dropdown when I just want to deploy the current version of the new build. But I can't figure out where to do that (neither can the devs I've asked, but it's NBD to them, whereas it's a constant annoyance for me).
All the advice I can find online says things like "all builds are automatically deleted when the branch expires," and that doesn't seem to be true, because these are definitely from old, expired plan branches. The advice also explains how to delete things manually from an existing plan branch, which I don't have, because the older plan branches expired and were removed.
Am I using the wrong terminology here and these aren't "builds" and there's a separate way to delete them? Do we have a setup that's failing to delete them when it should? Do devs need to do something different with their branches? I obviously don't have access to global settings but I could put in a request if that's what needs to change.
To be clear, I'm talking about going to the deployment preview, selecting "Promote existing release to this environment," entering the Jira number/beginning of the branch name, and seeing a million of these (which all look identical because our branch names are hella long):
[deployment preview screenshot]
I have read through all the Bamboo documentation relating to plans, builds, branches, and deployment, and Googled various combinations of relevant keywords and haven't found a solution. I've also asked devs I work with and they don't know either.

Using inherited process model for existing collection on Azure DevOps Server 2019

With Azure DevOps Server 2019 RC it is possible to enable the inherited process model on new collections (see release notes). Is there any way to use the inherited process model for existing collections as well, where no customization of the process has been made?
The inherited process model is currently only supported for new collections created with Azure DevOps Server 2019, not for existing collections.
See this Developer Community entry which asks for it.
I added a set of comments on how I hacked my way from an existing XML collection with a set of projects over to the inherited type:
https://developercommunity.visualstudio.com/content/idea/614232/bring-inherited-process-to-existing-projects-for-a.html
It works as long as a vanilla workflow is applied to the existing XML collection before doing the voodoo thing.
Not exactly an answer to your question, but we recently had the same task and I want to share how we handled it. We also wanted to move to the inherited model, and we did not want to do any hacking, so we decided to create a new collection on our Azure DevOps Server 2020 with the inherited model and also migrate our TFVC repository to Git. Roughly, we did the following (a command sketch follows the list):
Create the new collection. Documentation
Use git-tfs to create a local Git repository from our TFVC repository and push it
Use azure-devops-migration-tools to copy all work items from the old collection to the new one
In the old collection, add the ReflectedWorkItemId field to every work item type (look here)
In the new collection, add the ReflectedWorkItemId field to every work item type by using the process editor
Pro tip: create a full backup of the new collection so you can revert to this state easily. I went through multiple try-error-restore cycles.
You can't migrate shared steps or shared parameters like this, because you can't edit these work item types in the new collection. There is a workaround.
We used the WorkItemTrackingProcessor to migrate all Epics/Features/Product Backlog Items/Bugs/Tasks/Test Cases, and then the same processor, with the mentioned workaround, for Shared Steps and Shared Parameters.
This processor also migrates the Iterations and Area Paths.
Finally, we used the TestPlansAndSuitesMigration to migrate the Test Plans and Suites.
To speed up the migration, you can chunk the work items (for example, by date or ID) and run the migration multiple times.
Our build and release pipelines and task groups were migrated manually by export and import.
We migrated the variable groups by using the API.
The teams were created manually, and we also added the default area paths by hand.
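As promised above, here is a rough command sketch of the main steps. The server URLs, collection/project names, and PAT are placeholders, and the migration tool's executable name and configuration format vary between releases, so treat this as an outline rather than exact syntax:

    # Clone the TFVC project with git-tfs, then push it into the new Git repository
    git tfs clone http://oldserver:8080/tfs/OldCollection $/MyProject --branches=all
    cd MyProject
    git remote add origin http://newserver/NewCollection/MyProject/_git/MyProject
    git push --all origin

    # Run azure-devops-migration-tools against a prepared configuration file
    migration.exe execute --config configuration.json

    # Variable groups: read them from the old collection over the REST API,
    # then POST each returned group to the same endpoint on the new collection
    curl -u user:PAT "http://oldserver:8080/tfs/OldCollection/MyProject/_apis/distributedtask/variablegroups?api-version=5.0-preview.1"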

Managing checkouts of same binary file in different branches in Perforce

How can we prevent checking out / changing one binary file in different branches of the same content? A situation like this: designers have edited some game level (a *.umap binary file) in their branch, and programmers changed the same file in their branch (for example, they added some blueprint to that game level). So now we have three different versions of this file: one in the master branch before all changes, one in the designers' branch without the programmers' changes, and one in the programmers' branch without the designers' changes. Now we must merge the designers' changes and the programmers' changes into the master branch, but we can't.
So the question is: how do we handle these situations correctly? Maybe we can set up Perforce to check out a binary file in multiple branches at the same time, or something like that? Thanks...
There are a couple of different ways to think about this.
If you don't want work to continue/begin in one branch until changes from another branch have been merged into it, you can use Helix (Perforce) protections to give users read-only access to the branch.
This means they will be able to open files for edit, but won't be able to submit their changes.
More info about protections is here:
https://www.perforce.com/perforce/doc.current/manuals/p4sag/chapter.security.html
The protections would need to be changed when you are ready for work on the other branches to start.
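For example, the protections table (edited with p4 protect) could contain entries like these, where the group name and branch path are made up; because later lines override earlier ones, the second entry drops the group to read-only on the frozen branch:

    # Developers can write everywhere by default...
    write group Developers * //depot/...
    # ...but can only read the release branch until its merge is done
    read group Developers * //depot/release-1.0/...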
If you want a file to be automatically checked out on all branches each time someone checks it out on any branch where it exists, you would currently have to script this.
You could do it using the broker and a workspace for every branch, with a view that includes just the files you want checked out everywhere.
The files would then need to be checked out in these workspaces and locked, so that other users can't submit to these branches until the locks are removed.
This is not trivial and may have a performance impact.
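As a bare-bones sketch of what such a script might run (the workspace names and depot paths are made up): p4 -c selects each branch's dedicated workspace, and p4 lock stops anyone else from submitting those files:

    # Open and lock the shared level files in every branch's workspace
    p4 -c levels-main edit //depot/main/Levels/....umap
    p4 -c levels-main lock //depot/main/Levels/....umap
    p4 -c levels-design edit //depot/design/Levels/....umap
    p4 -c levels-design lock //depot/design/Levels/....umap
    # Later, p4 unlock / p4 revert in the same workspaces releases them again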
You might also be able to do it using pre-command triggers, if your server version is new enough.
If you want to go in to more detail about any of the above, I recommend you contact Perforce Technical Support.
Hope this helps,
Jen.

Installation and configuration of DSpace as a federated repository

We already have DSpace installed in various institutions across the country and want to deploy a centralized DSpace repository as part of a federated setup. Content from each branch repository will be synced to the centralized repository as soon as it is published. Content from the centralized repository is then synced back to the other branch repositories. Content is not synced directly between branch repositories, only through the centralized repository.
We welcome ideas on how to achieve this; while we have various references, including http://www.dlib.org/dlib/july06/tansley/07tansley.html and http://link.springer.com/chapter/10.1007/978-3-642-40276-0_21, we can't seem to make any headway.
For the first part of the puzzle, pulling the contents of your branch repositories into the central one, you could look at this feature:
https://wiki.duraspace.org/display/DSDOC5x/XMLUI+Configuration+and+Customization#XMLUIConfigurationandCustomization-HarvestingItemsfromXMLUIviaOAI-OREorOAI-PMH
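As a rough illustration (the hostname is hypothetical): each branch repository already exposes its items over OAI-PMH, and that endpoint is what the central repository's harvester polls, e.g.:

    # List a branch repository's records as Dublin Core via OAI-PMH
    curl "http://branch-repo.example.org/oai/request?verb=ListRecords&metadataPrefix=oai_dc"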
The second challenge, having the branch repositories pull back changes/or new items from the central one, is quite another thing.
Unidirectional integrations, where one repository is clearly the master record for a particular item, are quite straightforward. However, bidirectional updates, and managing the many cases and edge cases around changes happening in multiple places to the same items, are a big challenge.
So basically I'm advising you to reconsider your strategy and check if a scenario where one repository is always the master record for a particular item would also fit your use cases.

TFS Builds, Project Files: Orphaned references to files not being pushed are causing endless build errors

We are using TFS 2010 (Visual Studio) for our deployments and have client code projects (.csproj files) and database projects (.dbproj files). We understand that when our developers add files to our application, a corresponding reference to each file is added to the project file. If I push a changeset from Dev to QA that includes the project file, and the project file references a newly added file that is not in the changeset, I will receive a build error.
Once we started pushing just changesets (as opposed to performing full builds), this quickly became our number one bottleneck in doing TFS builds. I would deploy the database project and there would be 20 errors. The only way I could correct them was to navigate down the entire Solution Explorer tree and exclude each orphaned reference individually. This proved far too time-consuming, and on the advice of our lead programmer we have returned to doing full builds of QA and UAT.
We are in the early stages of this product, and therefore we will be adding many files for some time. We need a better solution for this problem. Neither the manual exclusions nor asking developers not to check in code until it is ready for QA will suffice for us. Has anybody out there had any experience with this problem, and if so, how did you deal with it? Thanks!
Jon
Pushing changesets to QA selectively is known as cherry-picking, and it causes the sorts of issues you are experiencing. This is not the recommended practice; instead, set up the QA build as a gated check-in, so that a successful build is part of the check-in. That way, if part of a fix is missed (as it may be when a fix spans multiple changesets), the build will fail and the check-in cannot be completed.
Second, have the developers do the second check-in to QA, or merge the dev changesets to QA, and have the team lead coordinate changes to project files by turning on "notify changes made by others" or by setting a policy for the dev team. Full builds should always be done, as partial builds do not always pick up the complete dependency graph.