TFS 2015 slow to populate Incoming Requests after update - sql

My organisation recently applied an update to TFS 2015 (14.102.25423.0 according to the 'About' page of the web interface) that resulted in the 'My Work' tab in Visual Studio 2015 taking up to one minute to populate. I played around with the queries and managed to narrow the problem down to population of the 'Incoming Requests' section of that tab. Under the hood, this is executing the following WIQL query.
SELECT [System.Id], [System.Links.LinkType], [System.Title], [System.State], [System.Reason], [System.AssignedTo]
FROM WorkItemLinks
WHERE (Source.[System.TeamProject] = @project and Source.[System.WorkItemType] in group 'Microsoft.CodeReviewRequestCategory' and Source.[System.AssignedTo] <> @me and Source.[Microsoft.VSTS.Common.StateCode] <> '1')
and ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward')
and (Target.[System.WorkItemType] in group 'Microsoft.CodeReviewResponseCategory' and (Target.[System.AssignedTo] = @me or Target.[Microsoft.VSTS.Common.ReviewedBy] = @me) and Target.[Microsoft.VSTS.Common.StateCode] <> '2')
ORDER BY [System.CreatedDate] desc, [System.Id] mode(MustContain)
I've reproduced the slowness using the TFS REST API described in https://www.visualstudio.com/en-us/docs/integrate/api/wit/wiql (passing the WIQL query above in the body of the POST request).
The following code review selectors are slow to populate: My Code Reviews & Requests, Incoming Requests.
The following code review selectors are fast to populate: My Code Reviews, Recently Finished, Recently Closed.
The problem is occurring for all users, not just my user.
No one on the team has more than a few code reviews open at any one time.
The problem started occurring practically overnight, i.e. on Friday the queries were completing in a second or so, but by Monday they were taking up to a minute.
Our TFS environment is hosted on Windows Server 2012 (non-R2).
Our TFS environment is backed by SQL Server 2012, SP3 (11.0.6020).
The upgrade to TFS 2015.3 was completed as per Microsoft's instructions; no issues were encountered, and there are no messages in the logs to indicate anything is wrong.
Does anybody have any suggestions about what might be causing this slowness and what can be checked in order to narrow the performance problem down further?

The Team Explorer in Visual Studio provides a dropdown selector for specifying which state of code reviews one wants to list. The available choices are:
My Code Reviews and Requests (open)
My Code Reviews (open/mine)
Incoming Requests (open/others)
Recently Closed (closed)
Recently Finished (finished)
(Each entry above is annotated with its state and ownership for clarity.)
Based on the description of your performance issue, and since it is occurring for all users, it seems there are a large number of code reviews in your team. When you open the My Work tab, loading all of those code reviews causes the performance issue.
For this situation, you can try this workaround: switch over to My Code Reviews in that Team Explorer dropdown selector. After this, please double-check whether the issue is gone or still exists.

Answering my own question here... My organisation ended up escalating this through Microsoft and eventually found that there was an issue with out of date statistics causing bad query plan generation. The query that was used to retrieve code review details was taking more than 60 seconds each time it was run.
The queries below will most likely make a significant difference to performance if you encounter the same problem.
USE <collection db name>;
UPDATE STATISTICS [dbo].[tbl_WorkItemCoreLatest] WITH FULLSCAN;
UPDATE STATISTICS [dbo].[tbl_WorkItemCustomLatest] WITH FULLSCAN;
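If you want to confirm that stale statistics are the culprit before running the full scan, a quick check along these lines (a sketch assuming SQL Server 2012 SP1 or later, which the SP3 instance mentioned above satisfies) shows when each statistic on those two tables was last updated and how many row modifications have accumulated since:
-- Check statistics freshness on the work item tables before rebuilding them.
USE <collection db name>;
SELECT s.name AS statistic_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id IN (OBJECT_ID('dbo.tbl_WorkItemCoreLatest'),
                      OBJECT_ID('dbo.tbl_WorkItemCustomLatest'));
A large modification_counter combined with an old last_updated date is the classic sign that the optimizer is working from out-of-date statistics.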
For reference, there's a duplicate of my original post on Microsoft Connect here: https://connect.microsoft.com/VisualStudio/Feedback/Details/3107261. The comments from Microsoft in this post indicate a number of people were seeing similar behaviour.

Related

Subscribe to a query

Is there a way, in native SQL, in a specific SQL database (e.g. PostgreSQL), or in a NoSQL database, to subscribe to a query and receive updates when an entry matches the criteria? For example, given the query SELECT * FROM users WHERE birthday = today(), is it possible to receive an update when an entry matches the criteria instead of using a so-called 'polling' mechanism? The query can be slightly more complex, because this idea is needed for a solution that sends recurring messages based on user preferences.
The only database I know that has built-in notifications like this is RebirthDB with a feature called "changefeeds":
They allow clients to receive changes on a table, a single document, or even the results from a specific query as they happen. Nearly any ReQL query can be turned into a changefeed.
The only problem is that the database began life as RethinkDB, but the company making it folded in 2016, leaving it to the open-source community. It's still alive as "RebirthDB" on GitHub with active development, but the documentation is just a copy of the old RethinkDB docs with GitHub notices. They have a website URL, but no website. I hope they can keep it alive: it's a great idea.
https://github.com/RebirthDB/docs

How to find Tickets I worked on during a given timespan?

We are using HP ALM 12.53.193 for issue tracking.
For a progress report of our development activity, I am trying to find out what tickets I worked on within the past month. Primarily, this includes tickets for which my user name was added to the Editors field, or that were marked as Fixed or Closed by me in the given timespan.
I am seeing some promising hints in this forum post that uses SQL to query the AUDIT_LOG table. Likewise, this [sqa.se] post suggests an SQL script for an Excel report.
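Presumably the SQL those posts have in mind looks something like the sketch below (the AUDIT_LOG/AUDIT_PROPERTIES column names are my assumptions about the standard ALM audit schema, and the date arithmetic is SQL Server flavoured, so it would all need verifying against the actual project database):
-- Rough sketch: defects I changed in the last month, according to the audit log.
-- AU_USER, AU_TIME, AU_ENTITY_TYPE, AU_ENTITY_ID, AP_ACTION_ID, AP_FIELD_NAME and
-- AP_NEW_VALUE are assumed column names; filter on AP_FIELD_NAME if you only care
-- about particular fields such as Status or Editors.
SELECT al.AU_ENTITY_ID  AS defect_id,
       al.AU_TIME       AS changed_at,
       ap.AP_FIELD_NAME AS field_changed,
       ap.AP_NEW_VALUE  AS new_value
FROM AUDIT_LOG al
JOIN AUDIT_PROPERTIES ap ON ap.AP_ACTION_ID = al.AU_ACTION_ID
WHERE al.AU_USER = 'my_user_name'
  AND al.AU_ENTITY_TYPE = 'BUG'
  AND al.AU_TIME >= DATEADD(month, -1, GETDATE())
ORDER BY al.AU_TIME DESC;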
Unfortunately, when I click New Business View Excel Report in ALM, I am asked to upload an Excel file for some reason, so I am not sure how to proceed from there to entering my SQL.
I have also tried creating a report with the New Project Report command (i.e. I am also OK with a non-scripted solution, if that is the way to go in ALM). However, it seems I can only show the filtered contents of a single table (e.g. Defect) there; the "Cross Filter" feature does not let me choose another entity (e.g. the Audit Log) to cross-filter my defects with.
How can I retrieve that data from ALM?
I am not entirely sure SO is the right site for this question, although I consider HP ALM a "software tool(...) commonly used by programmers". Please migrate if it is deemed a better fit for another site.

Creating listeners with SQL Server AlwaysOn suddenly stopped working

Problem: I created 10 AlwaysOn Availability Groups with SQL Server without a problem. Suddenly, it stopped working and I kept getting this ONLY on the "create the listener" part:
Msg 19471, Level 16, State 0, Line 9
The WSFC cluster could not bring the Network Name resource with DNS name 'L_MyListener' online. The DNS name may have been taken or have a conflict with existing name services, or the WSFC cluster service may not be running or may be inaccessible. Use a different DNS name to resolve name conflicts, or check the WSFC cluster log for more information.
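For context, creating a listener in T-SQL is just an ALTER AVAILABILITY GROUP ... ADD LISTENER statement along these lines (the group name, IP address and subnet mask below are placeholders):
-- The kind of statement that starts failing with Msg 19471 once the quota is hit.
ALTER AVAILABILITY GROUP [MyAvailabilityGroup]
ADD LISTENER N'L_MyListener' (
    WITH IP ((N'10.0.0.25', N'255.255.255.0')),
    PORT = 1433
);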
Sometimes I also got Msg 19476. This was all maddening because one moment I was creating listeners and availability groups, feeling like a guru, and then everything stopped and I lost hours of time.
So how do you solve this? Of course, Microsoft's own suggested text in the above error description was NOT helpful.
Apparently, each listener really creates a small "computer" object in Active Directory if you look. And... here's the kicker: a domain user can only join a computer to a domain a limited number of times, and that default is 10. Who would have thought that adding a listener equates to joining a computer to the domain?! Microsoft really should have made this listener behaviour more intuitive, at least in the description text of possible problems.
Well, on your Domain Controller, open ADSI Edit and, the first time you run it, connect it to the default naming context of your domain (the "DC=yourname..." entry with the CN= rows below it). Now right-click the "DC=" line, choose Properties, navigate down to ms-DS-MachineAccountQuota, and increase the limit from 10 to something else like 100.
You may need to run "GPUPDATE /FORCE" on the SQL Servers where you want to try again to add the listener. You may also have to clean up the mess it left (i.e. delete and restore the bad attempt at setting up your group and listener) before you try again.
With SQL Server 2016 supposedly going to require each database to be in its own group, with its own listener, people will hit this limit of 10 quite easily!
I hope this helps you. If so, please mark this as the answer. Of course, there are other reasons why people may get this error, as in the Microsoft error text above, but this whole post is for people who had it working just fine and then it suddenly stopped.

CRM 2013/15 Online and queries on huge data volumes

I'm working with a couple of million records; as soon as I try to run an Advanced Find with a linked entity as one of the criteria, the Advanced Find times out.
Would creating custom views allow me to filter properly? Does anyone know the proper way of using Advanced Find in this situation? Are there limitations in out-of-the-box CRM that I should be aware of?
In CRM 2013, it is possible to add indexes for specific fields by adding the columns to the Quick Find view for the entity.
You will need to wait for the Indexing Management Job to run (which is run every 24 hours by default) - see http://blogs.msdn.com/b/darrenliu/archive/2014/04/02/crm-2013-maintenance-jobs.aspx.
In previous versions of CRM, it was necessary to add the indexes directly to the database; this may still be necessary for more complex queries.
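Adding an index directly only applies to on-premise deployments, where the organisation database is accessible (CRM Online does not allow direct database changes); it is then just an ordinary SQL Server index on the entity's base table. A minimal sketch, with an illustrative table and column (use whatever field your query actually filters on):
-- Illustrative only: index a frequently filtered column on an entity base table
-- in an on-premise CRM organisation database.
CREATE NONCLUSTERED INDEX IX_ContactBase_EMailAddress1
    ON [dbo].[ContactBase] ([EMailAddress1]);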
It was too early to post an answer. The problem I encountered was related to the out-of-the-box Advanced Find. Searching, for example, for an account with some related contacts (a really plain search with a linked entity), I got a SQL timeout. Everything was out of the box, so I was a little clueless and opened a case with Microsoft. They found a bug: if I changed the sorting, the Advanced Find started working again. They are still investigating. So it wasn't a settings problem but a CRM bug.

Automating WebTrends analysis

Every week I access server logs processed by WebTrends (for about 7 profiles) and copy ad clickthrough and visitor information into Excel spreadsheets. A lot of it is just accessing certain sections and finding the right title and then copying the unique visitor information.
I tried using WebTrends' built-in query tool, but it is really poorly done (it only offers a drag-and-drop system rather than a text-based one) and it imposes a maximum number of parameters and a maximum query length. As far as I know, the tools in WebTrends are not suitable for my purpose of automating the entire web metrics gathering process.
I've gotten access to the raw server logs, but it seems redundant to parse that given that they are already being processed by WebTrends.
To me it seems very scriptable, but how would I go about doing that? Is screen-scraping an option?
I use ODBC for querying metrics and numbers out of WebTrends. We even fill a scorecard with all key performance metrics.
It's in German, but maybe the idea helps you: http://www.web-scorecard.net/
Michael
Which version of WebTrends are you using? Unless this is a very old install, there should be options to schedule these reports to be emailed to you, and also to bookmark queries. Let me know which version it is and I can make some recommendations.