We recently upgraded from TFS 2010 to TFS 2015. Everything appears to be fine post-upgrade, but we are getting the error "The item is locked in workspace (null);(null)." on some source control files. It looks like we have some orphaned locks that need to be tracked down and cleaned up, but the tbl_Lock table does not exist in the database, so the following select query won't work:
select * FROM tbl_Lock l
LEFT JOIN tbl_PendingChange pc
ON l.PendingChangeId = pc.PendingChangeId
WHERE pc.PendingChangeId IS NULL
Does anyone know how to detect and remove these locks in TFS 2015?
I also installed the TFS power tools, and neither Visual Studio 2015 nor the power tools are picking up the locks.
Updated:
BTW, when I run the SELECT query to find out where PendingChangeId is NULL, I get back no rows. I think the trick is the LEFT JOIN: pc.PendingChangeId would be NULL when a row in tbl_Lock had no matching record in tbl_PendingChange (and thus the lock was orphaned). So I'd still need to know where locks live, and what PendingChangeId should normally be joined to, in TFS 2015, in order to identify which files have a bad lock. (Or where a workspace no longer exists, which may be another possible source of the issue.)
And I also still need to know how to clean up those bad locks. I'd prefer to do this using the tools, either via the GUI or the command line, but I could also do it programmatically, either through the API or the TFS Object Model assemblies for TFS 2015.
I would really rather touch the database directly only as a last-ditch resort. And I would also rather use tf vc destroy on the item only as a last-ditch resort, since that would wipe out all history on the files.
Update 2
Aha! I think I found a way to identify the files, and it looks like my theory about what happened may be correct. Unfortunately, I had to probe the database with a READ UNCOMMITTED query to find the information. I couldn't get at this information programmatically or using the tools. (They all showed, or acted as though, the files were not checked out.) The query that I used on TFS 2015 was:
select pc.* from tbl_PendingChange pc
left join tbl_Workspace ws on pc.WorkspaceId = ws.WorkspaceId
where ws.WorkspaceId is null
This returned the three files that have the (null);(null) lock in our database, because the WorkspaceId listed in tbl_PendingChange no longer exists in tbl_Workspace.
How did this happen? Our CI server uses temporary TFS workspaces. I think what happened after the upgrade is that our CI server went to check out a file and apply an update to it. (For example, to increment version numbers as part of the build process.) It checked out the file, but failed to apply the update. (Our tools prefer working with Server workspaces, but it may have ended up with a Local workspace, so the file was still checked in locally but checked out on the server, and the change to the file couldn't be applied.) The code that we are using performs a workspace.Delete operation when the process completes, so the workspace was deleted - even though the workspace still had the file checked out! This created an orphaned record in tbl_PendingChange that isn't linked to any workspace, so the file is still locked with pending changes. But the GUI and tools don't see it as such, because they don't realize the pending change's workspace no longer exists.
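To make the theory concrete, here is a rough reconstruction of the suspected sequence using the version control client object model. The server URL, workspace and file names are illustrative, not our actual build code:
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

// Rough reconstruction of the suspected CI sequence (illustrative names only).
TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
VersionControlServer vcs = tpc.GetService<VersionControlServer>();

Workspace buildWorkspace = vcs.CreateWorkspace("Temp_CI_Workspace", vcs.AuthorizedUser);
try
{
    buildWorkspace.Map("$/Project/Source", @"C:\Builds\Temp");
    buildWorkspace.Get();

    // Checking the file out records a pending change (and lock) on the server.
    buildWorkspace.PendEdit(@"C:\Builds\Temp\AssemblyInfo.cs");

    // ... update the version number on disk, then check in; if this step fails,
    // the pending change is still registered on the server ...
    buildWorkspace.CheckIn(buildWorkspace.GetPendingChanges(), "Increment version");
}
finally
{
    // Deleting the workspace while it still holds a pending change appears to be
    // what left the orphaned row in tbl_PendingChange after the upgrade.
    buildWorkspace.Delete();
}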
So this brings me back around to how do I fix this? If someone knows of a way to get at these orphaned pending changes, I'd appreciate it. I tried using:
TfsTeamProjectCollection tfsTeamProjectCollection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(new Uri(szProjectUri));
VersionControlServer versionControlServer = tfsTeamProjectCollection.GetService<VersionControlServer>();
string[] items = new[] { ... server item path ... };
// Passing null for the workspace name and owner queries pending sets across all workspaces
PendingSet[] queryPendingSets = versionControlServer.QueryPendingSets(items, RecursionType.None, null, null);
PendingSet[] getPendingSets = versionControlServer.GetPendingSets(items, RecursionType.None);
but these aren't finding the orphans.
Update 3
I finally installed Team Foundation Sidekicks 2015 and gave it a try - the Status Sidekick specifically, and then the other tools. It's finding pending changes, but not the orphaned ones.
You can use Team Foundation Sidekicks to search for and undo locks by following these steps:
Install the tool and launch it.
Select TFS server to connect.
Select "Tools\Status Sidekick".
Set the "Search criteria" for the information you want.
Click "Search" button.
Select the locked file and click "Unlock lock" button.
You can use the command below to undo the pending changes:
tf undo "file_path" /workspace:workspace_name
Or you can just use the command below to delete the old workspace:
tf workspace /delete /server:your_tfs_server workspace;username
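If you prefer the programmatic route mentioned in the question, a rough Object Model equivalent of the two commands above might look like this (it assumes the owning workspace still exists; the server URL, workspace, owner and item names are illustrative):
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://your_tfs_server:8080/tfs/DefaultCollection"));
VersionControlServer vcs = tpc.GetService<VersionControlServer>();

// Undo the pending change that holds the lock (equivalent of "tf undo ... /workspace:...").
Workspace workspace = vcs.GetWorkspace("workspace_name", "owner_name");
workspace.Undo(new[] { new ItemSpec("$/Project/Path/file.cs", RecursionType.None) });

// Or remove the stale workspace entirely (equivalent of "tf workspace /delete ..."),
// which discards its pending changes and locks.
vcs.DeleteWorkspace("workspace_name", "owner_name");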
From the Visual Studio 2015 GUI:
File -> Source Control -> Advanced -> Workspaces...
In the dialog that comes up, check "Show remote workspaces" and the workspace holding the lock will appear in the window. Then select it and click "Remove".
For details, please check this blog, and for more ways to resolve this you can refer to the similar question: What do you do if the file in TFS is locked by someone else?
Update:
Regarding the SQL query: it's looking for pc.PendingChangeId IS NULL. You can query tbl_PendingChange under the collection database in a similar way. However, this is not a recommended approach, since operating directly on the TFS database is discouraged.
The following command cleared up the pending changes that were orphaned:
tf vc destroy <itemspec> /startcleanup
After running this command, the file could be added back to TFS, and it could be checked in and out and edited as normal. Running the query:
select pc.* from tbl_PendingChange pc
left join tbl_Workspace ws on pc.WorkspaceId = ws.WorkspaceId
where ws.WorkspaceId is null
also showed that the orphaned pending change record related to this file was gone as well.
Microsoft's documentation on this command can be found at https://msdn.microsoft.com/en-us/library/bb386005.aspx. Before using this command, you should review the documentation carefully and be sure to understand the consequences of using it.
Because this command permanently removes files and potentially all history from TFS - and does so recursively - you need to take precautions and be absolutely certain that you are targeting the command correctly. So before using this command, I would recommend taking the following additional precautions:
Stop all user and external access to TFS and any other software that may be running on the machine.
Make sure to run a full backup of TFS and any other databases located on the machine.
If you can, take a point-in-time snapshot of the server.
That way if something goes horribly wrong, you will have one or more points to fall back on.
Related
After (or before) we convert from TFS 2012.2 to TFS 2015.3 (which we have done just fine in a test run) we would like to revert our team project to the standard TFS 2015 Agile process template, and no longer use the customized agile process that we had modified from TFS 2012. We are quite willing to delete all of our work items and start over, but need to keep the team project history and change sets. Anyone know how to do this? Answers to prior questions on this did not address this situation. Thanks.
There is no easy way to do it. Basically the steps require you to use a lot of witadmin commands. Start by deleting any work item types that were added and don't exist in the default template (witadmin destroywitd).
Then push the standard work item definition for each work item type (witadmin importwitd; see the sketch after this list for a programmatic route).
Then push the categories (witadmin importcategories).
Then push the process configuration (witadmin importprocessconfig).
Then delete any fields that are no longer used (witadmin deletefield).
That should bring you back to the standard template.
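If you would rather drive part of this from code instead of witadmin, the import step can also be done through the work item tracking client object model. A minimal sketch, assuming you have exported the stock TFS 2015 Agile definitions to disk (the collection URL, project name and file path are illustrative):
using System;
using System.IO;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.WorkItemTracking.Client;

TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
WorkItemStore store = tpc.GetService<WorkItemStore>();
Project project = store.Projects["MyTeamProject"];

// Import a standard work item type definition (e.g. Bug.xml exported from a clean
// TFS 2015 Agile project with "witadmin exportwitd"); this overwrites the customized type.
string witdXml = File.ReadAllText(@"C:\Templates\Agile\Bug.xml");
project.WorkItemTypes.Import(witdXml);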
An alternative you could try is to use the WitMorph project. You can write a set of rules to migrate your data back into working order.
We've been using TFS since around 2009 when we installed TFS2008. We upgraded to TFS2010 at some point and we've been using it for source control, work item management, builds etc.
Our TFSVersionControl.mdf file is 287,120,000 KB (273GB). We ran some queries and found that our tbl_BuildInformationField table is massive. It has 1,358,430,452 rows, which take up 150,988,624 KB (143GB). We have multiple active products across multiple active builds, with more than one solution per build, and the solutions aren't free of warning messages.
My questions:
Is it possible to stop MSBuild from spamming the tbl_BuildInformationField table so much? I.e. only write errors and general build information and not all the warnings for every project?
Is there a way to purge or clean up old data from this table?
Is 273GB for 4 years of TFS use an average size?
Is 143GB for tbl_BuildInformationField a "normal" size?
The table holds the values and output of the build process. Take note that the build retention policy doesn't actually delete the build object; like everything else in TFS, the object is marked as deleted, and only public visibility and the drop location are cleared.
If you have retained the same build definitions for a very long time (when a build definition is deleted, the related objects get removed as well), I would suggest querying for build information, including deleted builds, using the TFS API; the same API will also allow you to remove them for good. Deleting the build definitions themselves probably will not work and will fail with a timeout error.
You can consult the following:
http://blogs.msdn.com/b/adamroot/archive/2009/06/12/working-with-deleted-build-data-in-team-foundation-server-2010-beta-1.aspx
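As a rough sketch of that approach (this is not taken from the blog post; the collection URL and team project name are illustrative, and DeleteOptions.All is destructive, so test against a restored copy first), querying builds including deleted ones and removing them for good might look like:
using System;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Client;

TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
IBuildServer buildServer = tpc.GetService<IBuildServer>();

// Query all builds for the team project, including ones already marked as deleted.
IBuildDetailSpec spec = buildServer.CreateBuildDetailSpec("MyTeamProject");
spec.QueryDeletedOption = QueryDeletedOption.IncludeDeleted;
IBuildQueryResult result = buildServer.QueryBuilds(spec);

// DeleteOptions.All removes the drop, test results, label, symbols and the build
// details (which is where the tbl_BuildInformationField rows should come from).
buildServer.DeleteBuilds(result.Builds, DeleteOptions.All);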
I have come across an issue that looks like TFS has permanently deleted a branch and all of its history and is not giving me the ability to interact with any of the changesets that were in that branch. Here is what happened:
I created a new branch(A) off of an existing branch(B).
I used A for a few months.
I merged everything in A back to B.
I deleted A by right clicking on the branch in Source Control Explorer and clicking delete and checked in the change.
[At this point I didn't check to see if A could be undeleted, and didn't notice anything amiss]
2 weeks pass
Now I want to view the history of a file that was merged
I go to the Visual Studio settings and check the box that shows all deleted items
A is nowhere to be found
I check to see if some other branches that I had deleted in the past were visible, and they are still present.
I look in the change history of the parent directory and I can't even see the changeset from when I deleted A.
I have admin access to the TFS database, but don't understand the schema well enough to search for all "delete" changesets.
I've tried to use the API in Microsoft.TeamFoundation.Client to get more information, but it isn't providing any more records than the TFS history window did.
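For reference, one way to query history through the version control client API looks roughly like this (the server path is illustrative; this is not necessarily the exact call used). Even with slotMode enabled, it surfaces no more than the history window does once a branch has been destroyed:
using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

TfsTeamProjectCollection tpc = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
VersionControlServer vcs = tpc.GetService<VersionControlServer>();

// Query full history for the parent folder; slotMode = true also reports items that
// previously occupied the path (e.g. deleted - but not destroyed - branches).
foreach (Changeset changeset in vcs.QueryHistory(
    "$/Project/ParentFolder", VersionSpec.Latest, 0, RecursionType.Full,
    null, null, null, int.MaxValue, true /* includeChanges */, true /* slotMode */))
{
    Console.WriteLine("{0} {1} {2}", changeset.ChangesetId, changeset.CreationDate, changeset.Comment);
}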
Update
I just ran a tf destroy command on a test branch to see what the symptoms are, and the symptoms are consistent with what I'm experiencing. I suspect that this branch was destroyed; now my goal is to find out if destroy leaves behind any information about who ran it or when.
Further investigation reveals that a team member on a different project had run a cleanup script during the two week period that had invoked the destroy command, accidentally destroying some of our deleted branches. The advice in How to find out who ran the TFS Destroy Command? revealed who it was, and how it had happened.
Why isn't it standard behavior for Accurev to automatically run an "Update" upon opening the program? "Update" updates a user's local sandbox with the latest files from the building/promoted area.
It seems like expected functionality that the most recent files should be synchronized first.
I'm not claiming that it should always update, but I'm curious as to why an auto-Update wouldn't be correct.
Auto-updating could produce some very unwanted results.
Take this scenario: you're in the middle of a development task, but you've made a mistake and need to revert a file that you just modified. So you open AccuRev, but before you have a chance to "revert to most recent version", you are bombarded with 100 files that have been changed upstream, including the one you want to revert. You are now forced into the position of resolving all the merge conflicts before your solution will build, including the merge of your (possibly unstable) code in progress.
Requiring the user to manually update keeps a protective 'bubble' around the developer, allowing them to commit (keep) changes within their own workspace without bringing down code changes that could destabilise the work in their sandbox. When the developer gets to a point where his code is ready to share with others, that is the appropriate time to do an update and subsequently build/retest the merged codebase before promoting.
However there is one scenario that I do believe auto-updating could be useful: after a workspace is reparented. i.e. when a developer's workspace is moved from one part of the stream hierarchy to another. Every time we reparent we have to do a little dance:
Accept the confirmation dialog that reminds us (rather verbosely) that we need to update our workspace before we can promote any changes.
Double-click the workspace to view its files.
Wait for AccuRev to do a "Pending" search, to determine whether any file changes are waiting to be committed.
And finally, perform the Update.
Instead of just giving us a confirmation dialog, it would be nice if AccuRev could just ask us if we want to Update immediately.
I guess it depends on preference. I for one wouldn't like the auto-update feature.
Imagine you have a huge project and you don't want to build it every time you start AccuRev. But with freshly updated sources you also couldn't debug, because the source files and the debugging info would no longer correspond.
I've got a maintenance plan that executes weekly in the off hours. It's always reporting success, but the old backups don't get deleted. I don't want the drive filling up.
DB Server info: SQL Server Standard Edition 9.00.3042.00
There is a "Maintenance Cleanup Task" set to
"Search folder and delete files based on an extension"
and "Delete files based on the age of the file at task run time" is checked and set to 4 weeks.
The only thing I can see is that my backups are each given their own subfolder and that this is not recursive. Am I missing something?
Also: I have seen the issues pre-SP2, but I am running service pack 2.
If you make your backups in subfolders, you have to specify the exact subfolder for deleting.
For example:
You make the backup by choosing the option that says something like "Make one backup file for each database" and check the box that says "Create subfolder for each database".
(I work with a German version of SQL Server, so I translate everything into English myself now)
The specified folder is H:\Backup, so the backups will actually be created in the folder H:\Backup\DatabaseName.
And if you want the Maintenance Cleanup Task to delete the backups via "Delete files based on the age of the file at task run time", you have to specify the folder H:\Backup\DatabaseName, not H:\Backup !!!
This is the mistake that I made when I started using SQL Server 2005 - I put the same folder in both fields, Backup and Cleanup.
My understanding is that you can only include the first level of subfolders. I am assuming that you have that check-box checked already.
Are your backups nested deeper than just one level?
Another thought: do you have one single maintenance plan that you run to delete backups of multiple databases? The reason I ask is that the only way I could see to do that would be to point the task at a folder one level higher, meaning that "include first-level subfolders" would not reach deep enough.
The way I have mine set up is that the Maintenance Cleanup Task is part of my backup process. So once the backup completes for a specific database, the Maintenance Cleanup Task runs on that same database's backup files. This allows me to be more specific about the directory, so I don't run into the directory structure being too deep. Since I have the criteria set the way I want, items don't get deleted until I am ready for them to be deleted either way.
Tim
Make sure your maintenance plan does not have any errors associated with it. You can check the error log under the SQL Server Agent area in SQL Server Management Studio. If there are errors during your maintenance plan, then it is probably quitting before it starts to delete the outdated backups.
Another issue could be the "workflow" of the maintenance plan.
If your plan consists of more than one task, you have to connect the tasks with arrows to define the order in which they will run.
Possible issue #1:
You forgot to connect them with arrows. I just tested that - the job runs without any error or warning, but it executes only the first task.
Possible issue #2:
You defined the workflow in a way that the cleanup task will never run. If you connect two tasks with an arrow, you can right-click on the arrow and specify whether the second task will run always, or only when the first one does/does not complete successfully (this changes the color of the arrow; possible colors are red/green/blue). Maybe the backup works, and then the cleanup never runs because it is set to run only when the backup fails?