What cube deployment changes will force a reprocess?

Sometimes when I deploy a cube that has been changed (from BIDS), I can continue to browse the existing cube data. Other times, the engine insists I reprocess the data before I can browse the cube.
I can't find a definitive resource showing which changes require a data reprocess and which do not.
SSAS 2008.

In general, you need to process when
adding or editing measures
adding a dimension attribute
editing dimension attribute relationships or order by properties
A more complete list can be found here.
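If you would rather trigger the processing from code than from BIDS after such a change, here is a minimal sketch using AMO (assuming the Microsoft.AnalysisServices assembly; the server, database, and dimension names are placeholders):

using Microsoft.AnalysisServices;

class ReprocessAfterDeploy
{
    static void Main()
    {
        // Placeholder connection string and object names -- adjust for your environment.
        var server = new Server();
        server.Connect("Data Source=localhost");

        Database db = server.Databases.FindByName("MyOlapDatabase");

        // A changed dimension (e.g. a new attribute or edited attribute relationships)
        // typically needs a full process, which in turn unprocesses dependent cubes.
        Dimension dim = db.Dimensions.FindByName("Product");
        dim.Process(ProcessType.ProcessFull);

        // Bring the rest of the database back to a processed state.
        db.Process(ProcessType.ProcessDefault);

        server.Disconnect();
    }
}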

Related

Is there a way to access raw data stored in Youtrack?

In YouTrack reports, you can view issues by two fields, using the creation date as the y-axis and any other field as the x-axis. But when you do that, as in this graph, you see the number of issues that are currently in the state shown on the x-axis. For example, if the x-axis is the state, then you see the current states of the issues that were created in the date intervals of the y-axis. But I also want to see the number of issues in each state chronologically. I want to see the states (or some other field) of the issues on May 21, 2021 (not their current states, but their states on May 21).
I know that YouTrack keeps the state changes, their dates, and much other data like that, because in various reports I can see that YouTrack uses historical data, but usually there is no way to download the data behind those reports.
I want to access all of that raw data. My plan is to create some reports that are not available in YouTrack Reports, using R or Python. Is there a way to access that raw data, or a guideline for accessing it?
The way to access raw data in YouTrack is through the REST API. For example, you can get the issue's activity data to retrieve the history of changes applied to the issue. This way you can identify how things have changed chronologically.
I can see that the Youtrack uses past data but usually there is no way to download the data of those reports.
Reports' data can be accessed via the API as well. The reports API endpoint is api/reports; however, it's not documented, as it may be subject to change, and in that case we can't guarantee backward compatibility. If you are fine with that, you can still use it. To see the exact request, check the network requests in the browser when loading a report.
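As a rough illustration of pulling activity data from the REST API, here is a C# sketch using HttpClient; the base URL, permanent token, issue ID, and the exact categories/fields parameters are assumptions you would adapt to your instance and YouTrack version:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class YouTrackActivityExport
{
    static async Task Main()
    {
        // Placeholder base URL, token, and issue ID -- adjust for your instance.
        var baseUrl = "https://youtrack.example.com";
        var token   = "perm:your-permanent-token";
        var issueId = "PROJ-123";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", token);

            // Request the change history (activities) of one issue. The categories and
            // fields parameters here are an assumption -- check the REST API docs of
            // your YouTrack version for the exact names you need.
            var url = $"{baseUrl}/api/issues/{issueId}/activities" +
                      "?categories=CustomFieldCategory" +
                      "&fields=timestamp,author(login),field(name),added(name),removed(name)";

            var json = await client.GetStringAsync(url);

            // From here you can parse the JSON and reconstruct the state of the issue
            // at any point in time (e.g. what its State field was on May 21, 2021).
            Console.WriteLine(json);
        }
    }
}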

Work Item Query Policy to check workitems match on merge

With our TFS 2015 source control, we require developers to check in changes against work items.
However, we've had a couple of instances where a developer has checked in against one work item within our development branch, but then, when merging to our QA branch, checked in the merged changes against a different work item. An example is where a bug has been created underneath a PBI: the changes in dev were checked in against a task under the bug, but then merged to QA against the PBI itself. This causes us issues with traceability.
I've seen that it's possible to add a "Work Item Query Policy" check-in policy. I'm just wondering if there is a way to write a query that will determine whether the work item of a check-in after a merge matches the work item of the source changesets. I'm not necessarily after the exact query (though it would be lovely if someone could provide one :) ); really I'm just wondering whether it's possible to have a query do this - i.e. is the information available to queries in TFS?
You can't do this with the existing policies; you'd need to build a custom policy.
So, technically this is possible. You can access the VersionControlServer object through the PendingChanges object:
this.PendingCheckin.PendingChanges.Workspace.VersionControlServer
You can use that to query the history of the branch in question and grab the work items associated to the check-ins in that branch.
You can also check the work items associated with the current check-in:
this.PendingCheckin.WorkItems
You could probably even provide the option to auto-correct by adding the correct work items to the check-in upon validation.
One of my policies provides an example on using the VersionControlServer from a policy.
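As a rough sketch of such a custom policy (not a drop-in implementation), the Evaluate override could compare the work items attached to the pending check-in with the work items associated to the source changesets; the branch path, history depth, and the specific QueryHistory overload used here are assumptions for illustration:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.TeamFoundation.VersionControl.Client;

// Sketch: on a merge, warn when the work items on this check-in do not match
// any work item associated to recent changesets on the source branch.
[Serializable]
public class MergeWorkItemMatchPolicy : PolicyBase
{
    public override string Description
        => "Warns when a merge check-in uses different work items than the source changesets.";

    public override string Type => "Merge work item match";

    public override string TypeDescription
        => "Checks that merge check-ins reuse the work items of the source changesets.";

    public override bool Edit(IPolicyEditArgs policyEditArgs) => true; // nothing to configure

    public override PolicyFailure[] Evaluate()
    {
        var pendingChanges = PendingCheckin.PendingChanges.CheckedPendingChanges;

        // Only interesting when the pending changes contain merges.
        if (!pendingChanges.Any(pc => (pc.ChangeType & ChangeType.Merge) == ChangeType.Merge))
            return new PolicyFailure[0];

        var vcs = PendingCheckin.PendingChanges.Workspace.VersionControlServer;

        // Work item IDs the developer attached to this check-in.
        var checkinWorkItemIds = new HashSet<int>(
            PendingCheckin.WorkItems.CheckedWorkItems.Select(wi => wi.WorkItem.Id));

        // Work item IDs associated with recent changesets on the source branch.
        // The path and history depth are placeholders for illustration.
        var sourceWorkItemIds = new HashSet<int>();
        var history = vcs.QueryHistory(
            "$/MyProject/Dev", VersionSpec.Latest, 0, RecursionType.Full,
            null, null, null, 50, false, true);

        foreach (Changeset changeset in history.Cast<Changeset>())
            foreach (var associated in changeset.AssociatedWorkItems)
                sourceWorkItemIds.Add(associated.Id);

        if (checkinWorkItemIds.Overlaps(sourceWorkItemIds))
            return new PolicyFailure[0];

        return new[]
        {
            new PolicyFailure(
                "The work items on this merge check-in do not match any work item on the source changesets.",
                this)
        };
    }
}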

The amount of data that was returned by a data connection has exceeded the maximum limit that was configured by the server administrator

I have an InfoPath form with a SharePoint Designer approval workflow.
I am showing some details on that form, but when there is more data in the SharePoint list, it gives the error below:
"The amount of data that was returned by a data connection has exceeded the maximum limit that was configured by the server administrator"
I guess the InfoPath form is getting all the data from the list instead of a particular row. Can anyone please suggest how to filter on the current item?
Resolution: Follow the steps below.
i. Open Central Administration.
ii. Go to General Application Settings.
iii. Find the InfoPath Forms Services section.
iv. Click Configure InfoPath Forms Services.
v. Locate the Data Connection Response Size setting.
vi. By default it is 1500 kilobytes.
vii. Change the response size in kilobytes (increase the number).
In this case what can be happening is that the amount of data you’re pulling back in your queries has grown to an unmanageable size.
Another possibility is that you're including additional columns that you don't actually need; removing the unnecessary fields could immediately resolve your issue if you don't require all the columns.
For Office 365, this is the approach that is going to work:
including additional columns that perhaps you don’t actually need to include – by removing the unnecessary fields this could immediately resolve your issue if you don’t require all the columns.

Some SSAS attribute hierarchies take a long time to resolve

Background
I have developed an SSAS cube that works well for most of my organization's purposes. The primary way users interact with this cube is via Excel Pivot Tables.
The Issue
Some of the Pivot Tables created by users have attributes for which their attribute hierarchies take a long time to resolve when the user first clicks on the drop-down box over the field name in the Pivot Table. For instance, the first time a user clicks the drop-down for a field called "Location - County", it takes ~45 seconds for the pop-up box with the list of ~40 counties to show up.
Side Note 1: If I had to guess, it actually seems like SSAS is resolving all field hierarchies in the PT at the same time as the first field that was clicked on because right after this initial resolve, the user can click on any of the fields in the PT and they resolve instantly. Said another way, the first field clicked on always takes ~45 seconds to resolve.
Side Note 2: The next time the user clicks on any of the field drop downs, it resolves almost instantly which I am assuming is because of caching.
The question
Why does it take SSAS so long to resolve some attribute hierarchy lists? It seems to me like this should always be instantaneous?! Doesn't SSAS build all attribute hierarchy lists ahead of time (i.e. during cube processing)?
Many thanks for any light you can shed on this issue for me.
Regards, Jon
1/20/15 Update: Adding Trace Files per Request: Zipped Trace Files. I included all EventClasses just to be sure, but if you need me to run again with only the EventClasses requested below I could.
"Trace of DAR Cubes Project - Test (from service restart).trc" - I restarted AS and immediately refreshed my PT, and recorded the traced events in this file.
"Trace of DAR Cubes Project - Test (after one refresh).trc" - After refreshing the Excel PT as described above, I closed Excel, reopened the same PT, and refreshed again. I expected a much faster refresh, but was surprised with almost the same ~35 second wait. If I keep Excel open between refreshes, it only takes ~2 seconds. This makes me wonder if Excel is caching the results somehow? Which would be weird b/c I thought all the logic and caching took place on the server side.

ssas 2005 deploy project and process options

I have seen many articles but still feel confused about the different process options on different objects (dimensions, cubes).
In my sample project, there was initially one measure group, Sales, and three dimensions: Date, Product, and Branch. I got these deployed with no problem.
Then I added a new measure group to the cube: Sales 1. Sales 1 will 'join' with the existing Date and Branch dimensions and a new dimension: Code.
When I processed the cube using Process Default, I thought it should process only the newly added measure group (Sales 1) and dimension (Code), so why did I see it also processing the existing Sales measure group?
And what's the difference between processing the cube and deploying the project? My understanding is that deploying the project will automatically process the cubes and/or dimensions. Is that correct?
Generally speaking, deploying just updates the definition, i.e. the structure, of your database, while processing just loads the data. But deploying also always does an unprocess of those objects that structurally changed and of those that depend on them, as the data would not match the new structure. Note that, for example, making an object visible or invisible, or in many cases renaming an object, is not considered a structural change, but adding or removing a sub-object like an attribute or measure is considered a structural change.
And then there is BIDS, trying to make things simple by automatically doing things that you did not explicitly trigger: in the default setting, after a deploy, BIDS issues a Process Default command. You can configure whether this should be done by right-clicking the project node in Solution Explorer in BIDS, selecting Properties, and then Configuration Properties/Deployment/Processing Options. I tend to keep the default setting for small cubes, and set it to "Do not process" for larger cubes that would take some time to process. But then I have to be aware that the cube may not contain all data if I did some deployments recently without processing.
As for why your first measure group was processed: maybe you made a small structural change to it without realizing.
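To make the deploy/process distinction concrete, here is a minimal AMO sketch, assuming the Microsoft.AnalysisServices assembly and placeholder server, database, and cube names; Process Default only touches whatever a deployment left unprocessed, while Process Full reloads everything:

using Microsoft.AnalysisServices;

class ProcessAfterDeploy
{
    static void Main()
    {
        // Placeholder server and object names -- adjust for your environment.
        var server = new Server();
        server.Connect("Data Source=localhost");

        Database db = server.Databases.FindByName("MySampleOlapDb");
        Cube cube   = db.Cubes.FindByName("Sales Cube");

        // Process Default: only processes objects that are currently unprocessed,
        // e.g. the measure group and dimension that a deployment just invalidated.
        cube.Process(ProcessType.ProcessDefault);

        // Process Full: drops and reloads everything in the cube regardless of state.
        // cube.Process(ProcessType.ProcessFull);

        server.Disconnect();
    }
}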