MDX query returns unexpected (null) - ssas

I've run into a strange problem with MS SSAS 2008 R2 (10.50.4000.0): two MDX queries that I expect to return the same result behave differently.
This query returns correct numbers:
select
[Measures].[Fact Count] on 0
from
[Cube]
where
[Dimension].[Attribute].&[id]
while this one, which should be equivalent to the first query, returns (null) from time to time (see details below):
select
[Measures].[Fact Count] on 0
from
(
select
[Dimension].[Attribute].&[id] on 0
from
[Cube]
)
Some details
The problem is not persistent. It appears and disappears randomly (!) on different databases on different physical servers.
We are using incremental data import and non-lazy processing. There is no strict correlation between the problem's appearance and data imports, but we are continuing to investigate in this direction.
Adding other members to the axis of the subselect fixes the problem, i.e. {[Dimension].[Attribute].&[id1], [Dimension].[Attribute].&[id2]} on 0 works fine.
Several dimensions are affected. All of them have integer keys. The problem appears on both visible and hidden dimension attributes.
Adding an extra dimension on the second axis of the subselect fixes the problem for some pairs of dimensions, i.e. the filter [Dimension1].[Attribute].&[id] on 0 fails, but [Dimension1].[Attribute].&[id] on 0, [Dimension2].[Attribute].&[id] on 1 works.
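Written out in full, the two workarounds look like this (minimal sketches; the member keys and dimension names are the placeholders used throughout this question, not values from a real cube):

```mdx
-- Variant 1: a second member on the subselect axis makes the filter work
select [Measures].[Fact Count] on 0
from (
    select {[Dimension].[Attribute].&[id1], [Dimension].[Attribute].&[id2]} on 0
    from [Cube]
)

-- Variant 2: an extra dimension on a second subselect axis also helps
-- for some pairs of dimensions
select [Measures].[Fact Count] on 0
from (
    select [Dimension1].[Attribute].&[id] on 0,
           [Dimension2].[Attribute].&[id] on 1
    from [Cube]
)
```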
We have two measure groups with several measures each. All dimensions are related to some (default) measure in the first measure group, but some dimensions are related only to the second measure group. The problem appears only on dimensions of the second type.
Does anyone have an idea about the reasons for such strange non-deterministic behavior of MS OLAP?
Thanks.

Related

HANA Studio - Calculation view Calculated column not being aggregated correctly

I encounter a problem when trying to aggregate (sum) a calculated column that was created in an Aggregation node of another Calculation view.
Calculation View: TEST2
Projection 1: a plain projection of another query.
[Screenshot: Projection1]
Aggregation 1: sum Amount_LC by HKONT and Unique_document_identifier. In the aggregation, a calculated column Clearing_sum is created with the formula shown in the screenshot:
[Screenshot: Aggregation1]
[Question 1] The result of this calculation in the raw data preview makes sense to me, but the result in the Analysis tab seems incorrect. What causes this different output between Analysis and Raw Data?
[Screenshot: Result Raw Data]
[Screenshot: Result Analysis]
I thought it might be that, instead of summing up, the analysis re-applies the formula of Clearing_sum, since it is in the same node.
So I tried creating a new Calculation view (TEST3) with a projection on TEST2 (all columns included) and ran it to check the output. I still get the same result (correct raw data but incorrect analysis).
[Screenshot: Test3]
[Screenshot: Result Analysis Test3]
[QUESTION 2] How can I get my desired result? (E.g. the sum of Clearing_sum for the highlighted row should be 2 according to the Raw Data tab.) I also tried enabling client-side aggregation on the calculated column, but it did not help.
Without the actual models (rather than just screenshots), it is hard to tell what causes the problem.
One possible cause is that removing HKONT changed the grouping level of the underlying view that computes SUM(Amount_LC); this in turn affects the calculation of Clearing_sum.
A way to avoid this is to instruct HANA not to strip those unreferenced columns and not to change the grouping level. To do that, the "Keep Flag" needs to be set for the columns that should stay part of the grouping.
For a more detailed explanation of this flag, check the documentation and/or blog posts like Usage of “Keep Flag”.
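The effect of a changed grouping level can be sketched in plain SQL (the table name and the CASE expression are invented for illustration, since the actual Clearing_sum formula is only visible in the screenshot):

```sql
-- Grouping by HKONT and the document identifier: the calculated
-- expression sees one SUM(AMOUNT_LC) per (HKONT, document) pair
SELECT HKONT, UNIQUE_DOCUMENT_IDENTIFIER,
       SUM(AMOUNT_LC) AS AMOUNT_SUM,
       CASE WHEN SUM(AMOUNT_LC) = 0 THEN 1 ELSE 0 END AS CLEARING_SUM
FROM LINE_ITEMS
GROUP BY HKONT, UNIQUE_DOCUMENT_IDENTIFIER;

-- If the engine prunes HKONT because a query does not request it, the
-- same expression is evaluated over a coarser aggregate and can yield
-- different CLEARING_SUM values; the Keep Flag prevents this pruning
SELECT UNIQUE_DOCUMENT_IDENTIFIER,
       SUM(AMOUNT_LC) AS AMOUNT_SUM,
       CASE WHEN SUM(AMOUNT_LC) = 0 THEN 1 ELSE 0 END AS CLEARING_SUM
FROM LINE_ITEMS
GROUP BY UNIQUE_DOCUMENT_IDENTIFIER;
```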

SAP DBTech JDBC: [2048]: column store error: search table error: [2724] Olap temporary data size exceeded 31/32 bit limit

I have a calculation view which is based on other calculation views and joins, to bring material accounts data from different vendors together (all joins have a 1:1 mapping with the target). In the final view I have a calculated column Formatted_MATERIAL (material numbers without any leading zeros; I used LTRIM() to remove them).
Now, when I search for Formatted_MATERIAL equal to some specific number, it shows the read error from the heading. If I search for a range of materials, it returns results.
For example, if I search for material (500098), it's present in the results of the following query
select "Formatted_MATERIAL"
FROM "_SYS_BIC"."CA_REPORTS_001_VK"
where "Formatted_MATERIAL" between 5000000 and 6000000
order by "Formatted_MATERIAL"
but no results for
select "Formatted_MATERIAL"
FROM "_SYS_BIC"."CA_REPORTS_001_VK"
where "Formatted_MATERIAL" = 5000098
The cause of the error is that, during some processing step in one of the views you're using, an intermediate result set exceeds 2 billion records.
Based on my experience with typical HANA use cases (mostly use cases related to SAP products), I am pretty sure that the way these underlying views have been modelled is not really right. Whenever you try to join or aggregate an intermediate result set of two billion records at once, chances are that important operations like filtering, projection and aggregation should have been done much earlier in the model.
Of course, without seeing the model(s) and the execution details (use PlanViz for this), and without knowing which HANA version you're using, there is nothing we can say about how to solve this issue.
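One detail that may be worth checking (an assumption on my part, not something stated in the question): LTRIM() returns a string, so comparing Formatted_MATERIAL to the numeric literal 5000098 forces an implicit type conversion on every row, which can prevent the filter from being applied early. Comparing against a string literal keeps the comparison on the string column:

```sql
-- Hypothetical variant: compare the calculated string column against a
-- string literal instead of a number, so no implicit conversion is needed
SELECT "Formatted_MATERIAL"
FROM "_SYS_BIC"."CA_REPORTS_001_VK"
WHERE "Formatted_MATERIAL" = '5000098';
```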

How many Axis can we use in MDX practically?

I have heard that there are around 128 axes in MDX.
AXIS(0) or simply 0 – Columns
AXIS(1) or simply 1 – Rows
AXIS(2) or simply 2 – Pages
AXIS(3) or simply 3 – Sections
…
So far I have used only two of them, Column (0) & Row (1).
I am just curious about how, where, and when or why I can use the other MDX axes.
As far as I know, SSMS only supports two axes, if I am not wrong.
Thanks.
How:
select ... on 0, ... on 1, ... on 2 and so on ... from [cube]
Where:
Any client that will not crash on an unexpected result format ;-)
When / Why:
A client could take advantage of several axes, e.g. for rendering the result in 3D using three axes. Even if the client does not render the result in 3D, it might be interesting to ask the server to return the result split over three axes for ad-hoc (or easier) processing.
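A minimal sketch of such a three-axis query, using the usual Adventure Works sample names (assumed here; any cube works the same way):

```mdx
-- Three axes: COLUMNS (0), ROWS (1) and PAGES (2); most clients,
-- including SSMS, can only render the first two
SELECT
    [Measures].[Internet Sales Amount] ON COLUMNS,
    [Product].[Category].MEMBERS ON ROWS,
    [Date].[Calendar Year].MEMBERS ON PAGES
FROM [Adventure Works]
```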
I do not know of any standard client that supports this.
But a typical application comes to mind: some years ago (before I was working with Analysis Services), we had a client requiring one and the same report for ten countries and five markets on fifty PowerPoint slides. If we had used Analysis Services at that time, we might have written a custom client application that uses a four-dimensional report and thus gets the data for all fifty PowerPoint slides with a single MDX query.
You need not think of the axes as dimensions in space. You can also think of them (as the name aliases suggest) as e.g. pages and chapters.

optimizing MDX Calculated Measure with LINKMEMBER

I have two kinds of reports (send reports and receive reports) and two role-playing dimensions (senders and receivers). I'm trying to compare the amounts from each report for one organization by its senders/receivers.
My current query is:
with member [Measures].[SentAmount] as ( [Receiver].[Code].&[XXX],[Measures].[Sent] )
member [Measures].[ReceivedAmount_Temp] as
(
[Sender].[Code].&[XXX],
[Measures].[Received]
)
member [Measures].[ReceivedAmount] as
(
LINKMEMBER
(
[Sender].[Code].CURRENTMEMBER,[Receiver].[Code]
),
root([Sender]),
[Measures].[ReceivedAmount_Temp]
)
SELECT
{
[Measures].[SentAmount],
[Measures].[ReceivedAmount]
} ON COLUMNS,
NON EMPTY
{ (
[Sender].[Code].[Code].ALLMEMBERS
*[Sender].[Name].[Name].ALLMEMBERS
)} ON ROWS
FROM MyCube
The result is correct, but the execution time is very long, especially in the real query where I have 15-20 measures.
Is it possible to optimize this query in any way?
This is not a complete solution, but an approach: what about using a "role-playing fact table"? You would have two copies of the fact table, named, say, "Sent" and "Received". Both would reference the same dimension "Customer" (from the Received fact table as the receiver, and from the Sent fact table as the sender). The other party (the sender for the Received fact and the receiver for the Sent fact) would reference the customer table as well, this time as a role-playing dimension.
Technically, you could implement this via views or named queries in the DSV, as the BIDS GUI does not allow using one fact table for two measure groups.
The advantage would be that you do not need any calculated measures for your query; these are probably the main reason for the bad performance.
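A sketch of what such DSV views might look like (table and column names are invented; the question does not show the underlying schema):

```sql
-- One physical fact table, exposed twice with the customer roles swapped
CREATE VIEW FactSent AS
    SELECT SenderCode   AS CustomerCode,      -- joins the "Customer" dimension
           ReceiverCode AS CounterpartyCode,  -- joins the role-playing dimension
           Amount       AS SentAmount
    FROM FactTransfer;

CREATE VIEW FactReceived AS
    SELECT ReceiverCode AS CustomerCode,
           SenderCode   AS CounterpartyCode,
           Amount       AS ReceivedAmount
    FROM FactTransfer;
```

Each view then feeds its own measure group, so Sent and Received amounts line up on the shared Customer dimension without any LinkMember() calls.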
Try replacing the root() function with an explicit [All] member. For some strange reason, root() was very slow when I used the LinkMember() function in my calculated measures.
Hope this helps you too!
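Applied to the query from the question, the suggestion amounts to something like this (a sketch; the exact name of the [All] member depends on the cube definition, and since root([Sender]) resets every hierarchy of the dimension, any other Sender hierarchies used in the query would need their [All] members listed as well):

```mdx
member [Measures].[ReceivedAmount] as
(
    LINKMEMBER([Sender].[Code].CURRENTMEMBER, [Receiver].[Code]),
    [Sender].[Code].[All],   -- explicit [All] member instead of root([Sender])
    [Measures].[ReceivedAmount_Temp]
)
```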

Multiplying Quantity * Price in Calculated Member

I know MDX is used for much more sophisticated math, so please forgive the simplistic scenario, but this is one of my first calculated members.
When I multiply Price x Quantity, the AS cube's data browser shows the correct information in the leaf elements, but not in any of the parents. The reason seems to be that I want something like (1 * 2) + (2 * 3) + (4 * 5) and not (7 * 10), which I think I am getting as a result of how the sum is done on the columns.
Is the IsLeaf expression intended to be used in these circumstances? Or is there another way? If so, are there any examples as simple as this I can see?
The calculated member I tried to create is just this:
[Measures].[Price]*[Measures].[Quantity]
The result for a particular line item (the leaf) is correct. But the result for, say, all of April is an incredibly high number.
Edit:
I am now considering that this might be an issue of bad data. It would be helpful, though, if someone could confirm that the above calculated member should work under normal circumstances.
Here is a blog post dealing with this particular problem: Aggregating the Result of an MDX Calculation Using Scoped Assignments.
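The approach from that post, reduced to a sketch for this scenario (a cube-script fragment; the measure names follow the question):

```mdx
-- In the cube's MDX script: [Sales Amount] must be a real (physical)
-- measure in the measure group, e.g. bound to a dummy NULL column,
-- because only real measures are aggregated up from the leaves
SCOPE ([Measures].[Sales Amount], LEAVES());
    THIS = [Measures].[Price] * [Measures].[Quantity];
END SCOPE;
```

With this assignment the product is computed per leaf cell, and the parents show the sum of the leaf results instead of the product of the sums.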
For leaf-level computations whose results are then summed, MDX is rather complex and slow.
The simplest way to achieve what you want is to make this a normal measure, based on a Price x Quantity calculation defined in the data source view.
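A sketch of that DSV approach as a named calculation or named query (the table and column names are assumptions, not taken from the question):

```sql
-- Named query in the data source view: compute the extended amount per
-- fact row, then expose SalesAmount as an ordinary SUM measure in the cube
SELECT
    OrderLineID,
    Quantity,
    Price,
    Quantity * Price AS SalesAmount
FROM FactOrderLine;
```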