Is there such a thing as a 'constant' in Analysis Services?

Is there such a thing as a constant in SSAS?
Example (this is really happening where I work): everyone agrees that gigabytes should be converted to megabytes by a factor of 1,000 (not 1,024), and terabytes to megabytes by 1,000,000.
Where would you store a number like that used across the board?

If it's inside the cube, you could create a calculated member that stores it. Define it in the cube's calculation script; constants are fine in there.
In cube calculation script:
CREATE MEMBER CURRENTCUBE.[Measures].[MBtoGigs] AS 1000;
Query against the cube:
SELECT Measures.MBtoGigs ON COLUMNS FROM [Cube]
One possible pitfall I would point out is that using constants like this can alter the way you'd expect NON EMPTY behaviour to work in your queries, as a constant is never 'empty'.
Having said that, you can define your own non-empty behaviour for calculated measures, so remember to try that with any calculated measures that involve constants if you experience any issues.
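For example, a minimal sketch in the calculation script, assuming a hypothetical base measure [Measures].[SizeMB] whose emptiness should drive the behaviour:
CREATE MEMBER CURRENTCUBE.[Measures].[SizeGB] AS
    [Measures].[SizeMB] / [Measures].[MBtoGigs],
    // treat the result as empty wherever the base measure is empty
    NON_EMPTY_BEHAVIOR = { [Measures].[SizeMB] };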

Where/how do you need to use it?
You can always create a fact table with a column holding that value (1000), which will become a measure group, and set the aggregation type on that measure to LastNonEmpty.
Since this value is in its own measure group, it can easily be referenced in the Expression property of a measure in a different measure group; a sketch is below.
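A sketch with hypothetical names, a one-row table that becomes its own measure group:
-- One-row fact table holding the constant
CREATE TABLE FactConstants (MBtoGB int);
INSERT INTO FactConstants (MBtoGB) VALUES (1000);
After adding it as a measure group and setting the MBtoGB measure's aggregation to LastNonEmpty, a measure in another measure group can reference it in its Expression property, e.g. something like [SizeGB] * [MBtoGB].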

Related

MS SSAS - Need to return a measure in a calculated member based on a tuple set and a max of underlying ID

This requires more advanced MDX knowledge than I have.
I need to get the RepoRate_MAX for repo products, at book and instrument level; looking at the Java code I'm replacing, that code always uses the max MurexId.
How can I perform the below (I've placed MAX on the dimension here, but this is wrong)? I need the combination of the dimensions and also the MAX MurexId:
[Measures].[RepoRate_VAL] = (([Deal].[ProductType].&[REPO],[Deal].[Book],[Deal].[Instrument],MAX([Deal].[MurexId])),[Measures].[RepoRate_MAX])
I'm sure it's a simple one but my mind is part way between the Java OO and MDX worlds currently haha :D
Thanks
Leigh
So after some experimenting I found out about the TAIL and Item MDX functions.
I think at one point I did get it working, but didn't make a note of what worked. I was playing around with this and variants of it, but most versions ended up with unusable query times:
[Measures].[RepoRate_VAL] = (([Deal].[ProductType].&[REPO],[Deal].[Book],[Deal].[Instrument],TAIL(EXISTING([Deal].[MurexId].[MurexId])).Item(0)),[Measures].[RepoRate_MAX])
So I then decided to push the RepoRate calculation back to the SQL data preparation script. Cleaner data is always better, and it keeps the calculated members simple.
I used SQL to determine the RepoRate at trade level with MAX(MurexId) and a GROUP BY on Book and Instrument, then updated my main fact table to ensure that the correct RepoRate was set at Book/Instrument level; a sketch is below.
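A sketch of that prep step, with hypothetical table names (FactDeal as the main fact table, TradeLevel as the trade-level source):
-- Take the RepoRate of the max MurexId per Book/Instrument
-- and push it onto the main fact table.
UPDATE f
SET f.RepoRate = t.RepoRate
FROM FactDeal f
JOIN TradeLevel t
    ON t.Book = f.Book AND t.Instrument = f.Instrument
JOIN (SELECT Book, Instrument, MAX(MurexId) AS MaxMurexId
      FROM TradeLevel
      GROUP BY Book, Instrument) m
    ON m.Book = t.Book AND m.Instrument = t.Instrument
   AND m.MaxMurexId = t.MurexId;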
Thus the calculated member is then:
[Measures].[RepoRate_VAL] = (([Deal].[Book],[Deal].[Instrument]),[Measures].[RepoRate_MAX])
Fast data prep and a fast calculated member on the Excel/Pivot/UI layer.

Find out the amount of space each field takes in Google Big Query

I want to optimize the space of my BigQuery and Google Storage tables. Is there an easy way to find out the cumulative space that each field in a table takes? This is not straightforward in my case, since I have a complicated hierarchy with many repeated records.
You can do this in the Web UI by simply typing (not running) the query below, changing <column_name> to the field of your interest:
SELECT <column_name>
FROM YourTable
and looking at the validation message, which shows the respective size.
Important: you do not need to run the query; just check the validation message for bytesProcessed, and this will be the size of the respective column.
Validation is free and invokes a so-called dry run.
If you need to do such column profiling for many tables, or for a table with many columns, you can code this in your preferred language: use the Tables.get API to get the table schema, then loop through all the fields, build the respective SELECT statement for each, dry-run it, and read totalBytesProcessed, which, as noted above, is the size of the respective column.
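A minimal sketch of that loop, assuming the google-cloud-bigquery Python client and a hypothetical my-project.my_dataset.my_table:
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.my_dataset.my_table"  # hypothetical table
table = client.get_table(table_id)           # Tables.get: fetches the schema

# Dry-run one SELECT per column; total_bytes_processed is that column's size.
dry_run = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
for field in table.schema:
    job = client.query(f"SELECT {field.name} FROM `{table_id}`", job_config=dry_run)
    print(field.name, job.total_bytes_processed)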
I don't think this is exposed in any of the metadata.
However, you may be able to easily get good approximations based on your needs. The number of rows is provided, so for some of the data types, you can directly calculate the size:
https://cloud.google.com/bigquery/pricing
For types such as STRING, you could get the average length by querying e.g. the first 1000 rows, and use this for your storage calculations.
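For example, a sketch assuming a hypothetical STRING column description:
SELECT AVG(LENGTH(description)) AS avg_chars
FROM (SELECT description FROM YourTable LIMIT 1000)
Per the pricing page, a STRING is stored as 2 bytes plus the UTF-8 encoded length of the value, so for ASCII data roughly (avg_chars + 2) times the row count approximates the column's size.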

Is there a way to specify or force data type of calculated member?

I've got data type of every measure in my cube specified as Currency.
I've also got calculated members, some of them have iif(isempty([Measures].[Measure1]) or [Measures].[Measure1] = 0, null, 100 * [Measures].[Measure2] / [Measures].[Measure1]) logic.
I'm accessing this cube using MdxClient (it uses AdomdCommand.ExecuteXmlReader internally) and have noticed that some of these calculated members are returned as xsd:double, not xsd:decimal. So I assume they are calculated as Double, not Currency. Query results are mapped to a strongly typed data set on the client side, so the returned type is important to me.
I can 'force' SSAS to return xsd:decimal by wrapping each of the calculated members in VBA!CDec or just CDec, but this seriously degrades performance.
Is there a smarter way to set or force calculated member to be Currency? Or at least be returned as xsd:decimal by AdomdCommand.ExecuteXmlReader?
I believe that it is not possible to set the data type of your calculated measures. The data type will be assigned based on the measures used in your calculation as well as the type of calculation you are performing.
Define the calculated member this way:
MEMBER [Measures].[CurrencyMeasure] AS
    IIF(
        ISEMPTY([Measures].[Measure1]) OR [Measures].[Measure1] = 0,
        NULL,
        100 * [Measures].[Measure2] / [Measures].[Measure1]
    ), FORMAT_STRING = "Currency"

How do I create custom rollup types in icCube?

Say I need WAvg (which is already implemented there) instead of the plain Avg function, but it is not in the dropdown list in the measure creation form. What should I do?
Alexander, I assume you're talking about the cube builder.
The weighted average is not available in the list of aggregation types because there is no straightforward way to implement it at the cube level. The aggregation types available for standard measures are simple calculations, meant to be very fast over millions of rows. You do have two kinds of average available for standard measures: 'average on leaves (rows)' and 'average on children', which might be near what you're looking for.
In the case of a weighted average you have to create a calculated measure: you need to define the values to "weight" your underlying measure against. The documentation on weighted average gives several examples.
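A minimal sketch of such a calculated measure, assuming hypothetical measures [Measures].[Price] (the value) and [Measures].[Qty] (the weight), weighted over the children of the current [Product] member:
CREATE MEMBER CURRENTCUBE.[Measures].[WAvg Price] AS
    // sum of weight * value, divided by sum of weights
    SUM([Product].[Product].CurrentMember.Children, [Measures].[Qty] * [Measures].[Price])
    / SUM([Product].[Product].CurrentMember.Children, [Measures].[Qty]);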

How to distinguish between master data and calculated, interpolated data?

I'm getting a bunch of vectors with data points for a fixed set of time points; below is an example of a vector with a value per time point:
1D:2
2D:
7D:5
1M:6
6M:6.5
But alas, a value is not available for all the time points. All vectors are stored in a database, and with a trigger we calculate the missing values by interpolation, or possibly a more advanced algorithm. Somehow I want to be able to tell which data points have been calculated and which were originally delivered to us. Of course I can add a flag column to the table, with values indicating whether a value is a master value or a calculated one (a sketch of this is below), but I'm wondering whether there is a more sophisticated way. We probably won't need to determine this on a regular basis, so CPU cycles are not an issue for either determination or insertion.
The example above shows some nice-looking numbers, but in reality they would look more like 3.1415966533.
The database used for storage is Oracle 10.
cheers.
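For reference, the flag-column approach mentioned in the question could look something like this in Oracle (a sketch only, with hypothetical table and column names):
-- Flag defaults to 'N' = master value as delivered
ALTER TABLE rate_vector ADD (is_calculated CHAR(1) DEFAULT 'N' NOT NULL);

-- The interpolation trigger/job marks the rows it fills in
UPDATE rate_vector
SET rate_value = :interpolated_value,
    is_calculated = 'Y'
WHERE vector_id = :id AND tenor = :tenor;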
Could you deactivate the trigger temporarily?