In my current cube, I have a calculated measure for average investment dollars. Now I want to create a range dimension table dynamically, based on different amounts for each department. The table would look something like this:
Dim_DollarRange
ID  MinRange  MaxRange  Description
1   1         2         1-2
2   3         5         3-5
3   6         9         6-9
4   10        14        10-14
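For concreteness, a minimal T-SQL sketch of that table (the dbo.Dim_DollarRange name and the column types are just assumptions for illustration, and only the sample rows above are shown -- the per-department thresholds would still need to be derived):

    -- Sample banding table matching the rows above (types are assumed)
    CREATE TABLE dbo.Dim_DollarRange (
        ID          int         NOT NULL PRIMARY KEY,
        MinRange    money       NOT NULL,
        MaxRange    money       NOT NULL,
        Description varchar(50) NOT NULL
    );

    INSERT INTO dbo.Dim_DollarRange (ID, MinRange, MaxRange, Description)
    VALUES (1, 1, 2, '1-2'),
           (2, 3, 5, '3-5'),
           (3, 6, 9, '6-9'),
           (4, 10, 14, '10-14');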
So basically there are two questions:
1) How do I set up a dimension table based on cube measures dynamically?
2) How do I look up values in a range dimension in SSAS?
I'm new to SSAS, thanks for any answers or tutorials!
Use the views that feed the Data Source View to check the related fact data in your source system and filter the resulting dimension list accordingly. If you do not have a set of views interfacing between your source and the cube, you can do the same thing directly within your queries; it is just not as clean.
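For example, a view along these lines (table and column names here are hypothetical, just to sketch the pattern) only returns dimension rows that actually have related fact data:

    -- Sketch: only expose dimension members that appear in the fact table
    CREATE VIEW dbo.vw_Dim_DollarRange
    AS
    SELECT dr.ID, dr.MinRange, dr.MaxRange, dr.Description
    FROM dbo.Dim_DollarRange AS dr
    WHERE EXISTS (
        SELECT 1
        FROM dbo.Fact_Investment AS f     -- hypothetical fact table
        WHERE f.DollarRangeID = dr.ID     -- hypothetical range key on the fact
    );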
I use this technique to limit long dimension lists to only the values actually used, giving users who access the cube directly (with Excel etc.) a precise list of the options in use within their filters/slicers. It does have the downside of hiding possible options from users' reports until they are first used, i.e. you will not see Cancelled Orders = 0 until the first cancelled order triggers its creation.
I am assuming that you want the data to be dynamic, not the creation of the dimensions themselves, i.e. you know you require Dimensions A, B and C and that they relate to Facts Y and Z. If you truly want to create whole dimensions dynamically (dimension name, measure group relationships, etc.), I do not think that is possible.
Related
I am trying to create a chart showing the 10 highest-performing teams according to their "proportion of target achieved" score.
My dataset comprises every day worked by every individual in my organisation. The data is grouped in a stored procedure by month, team, job role, area of the organisation...
My SSRS report takes this data and sums it at report level, based on a half-dozen parameters (mainly relating to the groups above).
The data is presented via a table, showing (for a given person/group/category) the hours worked, actual contact time (time with clients), expected contact time (time they're meant to spend with clients), and the proportion of their target they are achieving (actual contact time / expected contact time). All of this is reported for each of the last 6 months.
I wanted to create a bar chart showing the 10 teams with the highest proportion of target achieved values. This variable is calculated in SSRS in order to allow for the data to be more flexible.
SSRS wouldn't let me use that calculation in a chart filter, so I added a DENSE_RANK over the teams (called TeamOrder) in the stored procedure.
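For context, the ranking in the stored procedure is along these lines (a sketch only, with assumed table and column names rather than the actual procedure):

    -- Sketch: rank teams by proportion of target achieved, highest first,
    -- so TeamOrder = 1..10 identifies the top ten teams
    SELECT
        Team,
        SUM(ActualContactTime) * 1.0
            / NULLIF(SUM(ExpectedContactTime), 0) AS ProportionAchieved,
        DENSE_RANK() OVER (
            ORDER BY SUM(ActualContactTime) * 1.0
                     / NULLIF(SUM(ExpectedContactTime), 0) DESC
        ) AS TeamOrder
    FROM dbo.DaysWorked                   -- assumed source table
    GROUP BY Team;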
This is where the issue arises.
TeamOrder is used in the filter, set to <= 10. Ten teams are displayed in the graph, but they are not the ten with TeamOrder values of 1-10. The top couple are right, but in 10th position, for example, it's displaying the 32nd-best team.
Weirdly, when I set the filter to TeamOrder = [value], then it displays the one (correct) team (including the correct number 10 when TeamOrder = 10).
I'm at a complete loss as to what might be happening; any help would be enormously appreciated.
I have a dimension with hierarchy A - B, and a time dimension.
I have made 3 different filters from that dimension in PerformancePoint 2010, to use as cascading filters.
Cascading works fine, but some dimension members are only valid for certain periods of the time dimension, so the cascading filters show many "empty" members.
E.g. for Jan 2010 only B1, B2 and B3 show measures; other members (B4, B5, ...) show as empty.
How can I connect the time dimension to the cascading filters so that they only show the members valid at the current time?
I got it working using the NonEmptyCrossjoin MDX function.
This function returns a set that contains the cross product of one or more sets, excluding empty tuples and tuples without associated fact table data.
I'm quite new to SSAS so bear with me!
I have created a snowflake schema with Members in the fact table, and I have created a "distance from club" table with DistanceID, Distance and DistanceRange (this is denormalised in SQL Server, with DistanceRange appearing multiple times, once per distance; e.g. Distance 1 has a range of 1 - 10 and Distance 2 also has a range of 1 - 10).
I have then created a hierarchy with Distance Range at the top and Distance beneath it. This works OK in terms of providing drill-down functionality, but the ordering of Distance Range is wrong: it is being ordered as a string, so I get 1-10 followed by 100-10 and then 20-30.
How do I tell Distance Range to order by DistanceID?
Not sure if I'm doing it right, but: when you are editing your dimension, click on the DistanceRange attribute and, in the Properties window, there should be 'OrderBy' and 'OrderByAttribute' options. Try using those to get the result you need. Otherwise, you might want to try changing the 'Type' in the Properties window and see if that works.
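If OrderByAttribute needs a numeric key to sort the ranges by, one option (not part of the answer above, just a sketch with assumed names) is to expose a sort key per range in the source view and point the DistanceRange attribute's ordering at it:

    -- Sketch: one row per range, keyed by the lowest DistanceID in that range
    SELECT
        DistanceRange,
        MIN(DistanceID) AS DistanceRangeSortKey
    FROM dbo.DistanceFromClub             -- assumed source table name
    GROUP BY DistanceRange;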
I am currently trying to implement the following scenario in Tabular-mode SSAS; I'd appreciate your support.
We have a Transactions fact table that is linked to the Customer dimension, and we have a measure called Frequency that shows the number of times the user used his card during the selected period (the fact table is also linked to the Date dimension). What we need to do is create a dimension holding frequency groups such as 1 to 5, 5 to 10, 10 to 15, and 15 & above. The problem is that I am unable to link the fact table to this dimension, because the link between them would be a calculated measure.
Any thoughts?
Thanks and Best Regards
Omar Sultan
If you want to link the fact to a bucket dimension, you are going to have to specify the time granularity. I would suggest that you decide on one or more useful periods (day, week, month) and create a fact table (or several) to bucket your data at the appropriate grain.
This solution loses some flexibility compared with your original request, as the user will not be able to select the time period for the bucket dynamically; however, they gain the ability to compare fixed time periods and identify trends over time.
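As a sketch of that idea (table names, column names and bucket boundaries are assumptions, since the stated groups overlap at their edges), a monthly-grain fact could count each customer's transactions and assign each row a frequency-bucket key that a small bucket dimension then describes:

    -- Sketch: count card transactions per customer per month and band them
    SELECT
        t.CustomerKey,
        d.MonthKey,
        COUNT(*) AS Frequency,
        CASE
            WHEN COUNT(*) <= 5  THEN 1    -- '1 to 5'
            WHEN COUNT(*) <= 10 THEN 2    -- '5 to 10'
            WHEN COUNT(*) <= 15 THEN 3    -- '10 to 15'
            ELSE 4                        -- '15 & above'
        END AS FrequencyBucketKey
    FROM dbo.FactTransactions AS t        -- assumed names throughout
    JOIN dbo.DimDate AS d ON d.DateKey = t.DateKey
    GROUP BY t.CustomerKey, d.MonthKey;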
How can I re-use a single complex dataset across a number of tables?
The dataset has a number of computed columns that need to be reported both in detail and in summary. Here's a very simplified example dataset:
is_food  sale_association  food_type  total_sold  total_associations  percent_total
1        Before Movie      Popcorn    50          3                   x BirtMath.safeDivide(...)
0        Before Movie      Soda       10          2                   x BirtMath.safeDivide(...)
1        During Movie      Jujubee    10          1                   x BirtMath.safeDivide(...)
0        After Movie       Soda       15          2                   x BirtMath.safeDivide(...)
From this one dataset, I'd want to create a detailed summary of all food types while rolling up non-food (using the 'is_food' column), another summary of all food types, another detailed summary of food with rolled-up non-food by sale_association, and so on.
The report would also contain a number of percentages (6 in the most complex table) that need to be calculated (some across a row, others across all rows in a given group), all of which can have a zero value for the denominator and so need to be guarded with safeDivide. That is a pain to do in the source SQL query, which is itself doing aggregation -- checking for divide-by-zero when both the numerator and denominator are sums leads to hairy queries.
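For reference, the guard being described usually ends up looking something like this in the source query (column and table names here are made up); repeating it for half a dozen percentages is what makes the pure-SQL approach unwieldy:

    -- Sketch: NULLIF turns a zero denominator into NULL,
    -- so the division returns NULL instead of raising an error
    SELECT
        sale_association,
        food_type,
        SUM(total_sold) * 1.0
            / NULLIF(SUM(total_associations), 0) AS percent_total
    FROM dbo.concession_sales             -- made-up source table
    GROUP BY sale_association, food_type;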
Obviously I can do this by tailoring the SQL query as appropriate, but it seems like a waste of time and effort to create 12 or 15 very similar queries when I've already managed to create the monster query for the most detailed table.
What doesn't seem straightforward is how to perform the rollups in a table. I managed to hack something together by hiding rows that would later be summed up (e.g. "is_food == 0" in the example) and then creating custom data bindings that are displayed in a footer row. Not only does it feel like a hack, it also interferes with the ability to naturally order rows. Again, going back to the example, if I was ordering by total_sold and summarizing rows with is_food == 0, the natural order should be Popcorn, Non-food, Jujubee.
There's nothing in the BIRT wiki about this, nor does "BIRT: A Field Guide, 3rd Edition" really delve into the topic.
This seems like a fairly open-ended question (although I agree that re-using a single dataset makes much more sense than having multiple queries retrieving the same data in slightly different ways). A few general suggestions:
Use the most detailed version of the data required as a common dataset for each BIRT report item (typically BIRT tables)
Where only summary-level reporting is required, add groups to the BIRT table at the desired level, add data items as required to the group headers/footers, and delete the detail-level row(s) from the BIRT table.
Where detail-level reporting is required in some cases (eg. for food items but not for non-food items), add groups to the BIRT table as above, and set the visibility of the detail row (in Property Editor - Properties - Visibility) to check Hide Element, then specify the appropriate expression to suppress the non-required rows (non-food items, in this example).
Aggregations (i.e. summary expressions) can be added to tables by selecting the whole table, selecting the Binding tab within the Property Editor and clicking the Add Aggregation... button.