Analysis Services: Dynamic Weighted Allocation on different writeback levels - ssas

I'm using an UPDATE CUBE statement to write back values from a frontend (XLCubed) to the Analysis Services cube.
UPDATE CUBE [Planung]
SET
([Measures].[Kg Anpassung]
,[Datentyp].[Datentyp].[All].[FJ Plan]
, [Produkt].[All].[Element from level 1])
= 77000
USE_WEIGHTED_ALLOCATION
BY
([Measures].[DB1], [Datentyp].[Datentyp].[All].[LJ], [Produkt].[All].currentmember)
/ ([Measures].[DB1], [Datentyp].[Datentyp].[All].[LJ], [Produkt].[All].currentmember.parent)
The value (77000) is allocated according to the values of the actual year (LJ) in a different measure. The weight factor is calculated by taking the product being written to, divided by its parent's value.
With the code above it is possible to input values on base elements of the product hierarchy or on level 1 of the product hierarchy. But it is not possible to write to elements on level 2 of the product hierarchy, as you would need the parent.parent element of the base element to calculate the weight factor.
Base Element (Level 0) -> Written Value = Input (handled by Frontend)
Level 1 -> Written Value = Input * (Actual Year / (Actual Year, Product.Parent))
Level 2 -> ?
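Spelled out per the logic above, a level-2 write would presumably need the grandparent in the denominator of the BY clause. This is only a sketch of that one case, not the generic formula asked for below; [Element from level 2] is a placeholder name and everything else mirrors the statement above. For illustration with made-up numbers: a leaf whose LJ [DB1] value is 30 against the written-to element's 100 would receive 77000 * 30/100 = 23100.
UPDATE CUBE [Planung]
SET
([Measures].[Kg Anpassung]
,[Datentyp].[Datentyp].[All].[FJ Plan]
, [Produkt].[All].[Element from level 2])
= 77000
USE_WEIGHTED_ALLOCATION
BY
([Measures].[DB1], [Datentyp].[Datentyp].[All].[LJ], [Produkt].[All].currentmember)
/ ([Measures].[DB1], [Datentyp].[Datentyp].[All].[LJ], [Produkt].[All].currentmember.parent.parent)
This still hard-codes the number of .parent steps per target level, which leads to the question below.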
Is it somehow possible to write a formula which is valid for every
possible level in the writeback hierarchy?
Alternative: Is it possible to read the level of the element my input
is done on?
Best regards
Paul

Related

SSAS MDX Calculated Measure Based on Related Dimension Attribute Value

I have a measure [Measures].[myMeasure] that I would like to create several derivatives of based on the related attribute values.
e.g. if the related [Location].[City].[City].Value = "Austin" then I want the new calculated measure to return the value of [Measures].[myMeasure], otherwise, I want the new calculated measure to return 0.
Also, I need the measure to aggregate correctly meaning sum all of the leaf level values to create a total.
The below works at the leaf level or as long as the current member is set to Austin...
Create Member CurrentCube.[Measures].[NewMeasure] as
iif(
[Location].[City].currentmember = [Location].[City].&[Austin],
[Measures].[myMeasure],
0
);
This has 2 problems:
1. I don't always have [Location].[City] in context.
2. When multiple cities are selected this returns 0.
I'm looking for a solution that would work regardless of whether the related dimension is in context and will roll up by summing the atomic values based on a formula similar to above.
To add more context, consider a transaction table with an amount field. I want to convert that amount into measures such as payments, deposits, returns, etc., based on the related account.
I don't know the answer but just a couple of general helpers:
1. You should use IS rather than = when comparing to a member.
2. You should use NULL rather than 0 - 0 and NULL are effectively the same here, but using 0 will slow things down a lot as the calculation will be fired many more times. (This might help with the second part of your question.)
Create Member CurrentCube.[Measures].[NewMeasure] as
iif(
[Location].[City].currentmember IS [Location].[City].&[Austin],
[Measures].[myMeasure],
NULL
);
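Building on those two fixes, one way to get the roll-up behaviour asked about (summing the atomic city-level values regardless of what is on the axes) is to force the calculation down to the leaf level and sum it back up. A minimal sketch, assuming [Location].[City].[City].Members is the leaf level of that attribute hierarchy:
Create Member CurrentCube.[Measures].[NewMeasure] as
SUM(
    // restrict to the cities in the current context, then iterate them
    EXISTING [Location].[City].[City].Members,
    iif(
        [Location].[City].currentmember IS [Location].[City].&[Austin],
        [Measures].[myMeasure],
        NULL
    )
);
EXISTING limits the summed set to the cities in the current selection, so the member should return the Austin value whenever Austin is part of the selection and NULL otherwise, including when several cities are selected together.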

Histogram in MDX with icCube

How is it possible to do a dynamic histogram using MDX?
For example, our schema is based on web visits; we have the number of sessions and the number of click-outs. We would like to have the number of sessions with one click-out, taking into account that this might depend on other dimensions (country, hour, entry page...).
To solve this we are going to work with two different concepts: first create a new hierarchy, and afterwards use MDX+.
First we have to create a new dimension, [Histogram]. This new dimension will contain the definition of the buckets with two member properties: start-bucket and end-bucket. A pseudo table looks like:
Name   start-bucket   end-bucket
0-1    0              1
1-2    1              2
2-3    2              3
...
10++   10             2147483647
This hierarchy is not linked to the facts; for each member it defines two properties describing a bucket.
Let's put this to use in MDX.
Let's assume we have a dimension, [Sessions], and a measure, [click-outs]. First we're going to use the OO features of icCube and create a vector that, for each session, calculates the number of [click-outs]:
-> Vector( [Sessions], [click-outs], EXCLUDEEMPTY )
Vector has a function, hist(start,end), that does exactly what we need: it counts all occurrences between start and end (excluded).
Vector( [Sessions], [click-outs], EXCLUDEEMPTY )->hist(0,1)
Putting this together with our newly created hierarchy allows us to automate the calculation for all buckets. The CONST function ensures the vector is calculated only once, as it might be time consuming.
The final MDX looks like (note that both function and calc. members could be created in the schema script, once per schema):
WITH
CONST FUNCTION ClicksBySession() AS Vector( [Sessions], [Measures].[click-outs], EXCLUDEEMPTY )
MEMBER [Session/Clickout] AS ClicksBySession()->hist( [Histogram].currentMember.properties("start-bucket", TYPED), [Histogram].currentMember.properties("end-bucket", TYPED) )
SELECT
{[Session/Clickout] } on 0,
[Histogram].members on 1
FROM [clickout]
--where [Geography].[Europe]
And there you have a histogram that is calculated dynamically and can easily be inserted in a dashboard and reused.

QlikView calculation of range for frequencies

I am given a task to calculate the frequency of calls across a territory. If a rep called a physician regarding the sale of the product 5 times, then the frequency is 5 and the HCP count is 1. I generated frequencies from 1 to 124 in my pivot table using a calculated dimension, which is working fine. But my concern is:
My manager wants the frequencies up to 19 listed in order: 1, 2, 3, 4, 5, 6 ... 19.
And the frequencies from 21-124 grouped as 20+.
I would be grateful if someone could help me with this.
Use the Class function in the dimension to split into buckets:
=class(CallId,5)
And the expression:
=count(Distinct CallId)
You can then customize the output by adding parameters:
class( var,10 ) with var = 23 returns '20<=x<30'
class( var,5,'value' ) with var = 23 returns '20<= value <25'
class( var,10,'x',5 ) with var = 23 returns '15<=x<25'
I think you can do this with a calculated dimension.
If your data comes out of the load with one row per physician, the below will likely work.
Dimension
- =IF(CallCount<=19,CallCount,'+20')
Expression
- =COUNT(DISTINCT Physician_ID)
Sort
- Numeric Value Ascending
If your data has to be aggregated (more than one call row per provider coming from the load), try the above, substituting the below for the dimension.
Dimension
- =IF(AGGR(SUM(CallCount), Physician_ID) <=19,AGGR(SUM(CallCount), Physician_ID),'+20')

using scope with calculated member

I have a problem with my calculated member. Whenever this member is involved in a calculation or query, it takes a long time to execute. I am trying to reduce the execution time.
I have to remove the IIF condition from the member and start using SCOPE instead.
CREATE Member CurrentCube.[Measures].[AvgAmount] as
IIF(ISLeaf([Customer].[ParentCustomer].currentmember),
[Measures].[Value],
(SUM([Customer].[ParentCustomer].CURRENTMEMBER.CHILDREN) /
COUNT([Customer].[ParentCustomer].CURRENTMEMBER.CHILDREN))
) ,
Format_String = "#.0000000;-#.0000000;0;0",
Non_Empty_Behavior = [Measures].[Amout];
I have created a hierarchy of customers, which is [ParentCustomer] here. I want to see the average amount of all the children under the parent customer, but when I am looking at a child level which does not have any children, it should only show [Measures].[Amout].
Thanks in advance
Regards,
Sam
From your question, I assume you really want to have the average of the children, and not the average of all leaf level descendants. The latter could be implemented as follows:
Create a new measure group on the customer dimension table which has a single measure 'customer count'. This would just be implemented as a count - or, if your customer dimension table has a granularity finer than a single customer, as a distinct count of the customer key or something like this.
Then just define your measure as
CREATE Member CurrentCube.[Measures].[AvgAmount] as
[Measures].[Value] / [Measures].[customer count],
Format_String = "#.0000000;-#.0000000;0;0",
Non_Empty_Behavior = [Measures].[Amout];
This assumes that the aggregation of [Measures].[Value] is defined as sum or one of the semi-additive aggregations, but not max or min or something similar.
However, I assume from your question that this is not what you want. Instead you want to see the average of the children at each level. And I assume that [Customer].[ParentCustomer] is a standard user hierarchy and not a parent child hierarchy. Then, the approach suggested in the title, using SCOPE, would work. Let's assume you have three levels in your [Customer].[ParentCustomer] hierarchy:
The (implicitly defined) All level, just containing the All member
level A, built from attribute A of the dimension
level B, which is the leaf level and built from attribute B of the dimension
Then, under similar assumptions about the [Measures].[Value] aggregation, you could define the AvgAmount measure as follows:
// create the measure as it is correct for level B:
CREATE Member CurrentCube.[Measures].[AvgAmount] as
[Measures].[Value],
Format_String = "#.0000000;-#.0000000;0;0",
Non_Empty_Behavior = [Measures].[Amout];
// overwrite the definition for level A:
SCOPE([Customer].[ParentCustomer].[A].Members);
[Measures].[AvgAmount] = [Measures].[Value] / (EXISTING [Customer].[B].[B].Members).Count
END SCOPE;
// overwrite the definition for the All level:
SCOPE([Customer].[ParentCustomer].&[All]);
[Measures].[AvgAmount] = [Measures].[Value] / (EXISTING [Customer].[A].[A].Members).Count
END SCOPE;
This approach, using SCOPE, would not work for a parent-child hierarchy, but as you do not write that you have one, I just assume you don't.

Spatial SQL query showing parcels containing centroid of building

I am trying to write a query that selects parcels that contain the centroid of a certain building code (bldg_code = 3).
The parcels are listed in the table "city.zoning", which contains columns for the PIN, geometry, and area of each parcel. The table "buildings" contains columns for bldg_type and bldg_code, indicating the building type and its corresponding code. The building type of interest for this query has a bldg_code of 3.
So far I've developed a query that shows parcels that interact with the building type of interest:
select a.*
from city.zoning a, username.buildings b
where b.bldg_code = 3 and sdo_anyinteract(a.geom,b.geom) = 'TRUE';
Any ideas?
You can use SDO_GEOM.SDO_CENTROID (documentation) to find the centroid of a geometry.
Note that the centroid provided by this function is the mathematical centroid only and may not always lie inside the geometry, for example if your polygon is L-shaped. SpatialDB Adviser has a good article on this that illustrates the issue.
If this isn't a problem for you and you don't need that level of accuracy, just use the built-in function, but if you do consider this to be a problem (as I did in the past), then SpatialDB Adviser has a standalone PL/SQL package that correctly calculates centroids.
Depending on your performance needs, you could calculate the centroids on the fly and use them in your query directly, or alternatively add a centroid column to the table and compute and cache the values with application code (best case) or a trigger (worst case).
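A minimal sketch of the cached-column variant (the column name centroid and the 0.005 tolerance are assumptions; use the tolerance from your own spatial metadata, and this is untested like the query below):
-- add a geometry column to hold each building's centroid (name is illustrative)
ALTER TABLE username.buildings ADD (centroid SDO_GEOMETRY);

-- populate it once; SDO_GEOM.SDO_CENTROID takes the geometry and a tolerance
UPDATE username.buildings b
   SET b.centroid = SDO_GEOM.SDO_CENTROID(b.geom, 0.005);
COMMIT;

-- to query against the centroid with SDO_INSIDE later, you would also need a row in
-- USER_SDO_GEOM_METADATA for the new column plus a spatial index on it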
Your query would look something like this:
SELECT a.*
FROM city.zoning a
JOIN username.buildings b ON sdo_contains(a.geom, b.centroid) = 'TRUE'
WHERE b.bldg_code = 3
Note that this is using SDO_CONTAINS on the basis of the a.geom column being spatially indexed and a new column b.centroid that has been added and populated (note - query not tested). If the zoning geometry is not spatially indexed, then you would need to use SDO_GEOM.RELATE, or index the centroid column and invert the logic to use SDO_INSIDE.
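For completeness, a sketch of that inverted SDO_INSIDE form, assuming the centroid column from the sketch above exists and has been spatially indexed (again untested):
SELECT a.*
  FROM city.zoning a
  JOIN username.buildings b ON SDO_INSIDE(b.centroid, a.geom) = 'TRUE'
 WHERE b.bldg_code = 3;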