MDX member concatenation in Essbase

A complete MDX/Essbase newbie is looking for your help.
I have an MDX query:
SELECT
  {([Version].[FINAL])} ON COLUMNS,
  crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(
      { [Period].[Jan], [Period].[Dec], [Period].[Sep] },
      { [Entity].[BE08008309], [Entity].[BTSEMEALA] }),
      { [Years].[2018], [Years].[2017], [Years].[2014] }),
      { [ICP].[ICP] }),
      { [Currency].[USD] }),
      { [Custom1].[TOPC1] }),
      { [Custom2].[TOPC2] }),
      { [Custom3].[TOPC3] }),
      { [Scenario].[Actual], [Scenario].[Junfor], [Scenario].[PlanRestate] }),
      { [Account].[RF_ACCUMDEP], [Account].[COSAMORT] })
  ON ROWS
FROM [EssRptg.EssRptg]
which gives me an output containing a row/tuple, such as:
(January, BE08008309-North, Central and East HQ Mtmt Adj., 2014, ICP, US Dollar, TOPC1, TOPC2, TOPC3, ACTUAL, ACCUMDEP - Accumulated Depreciation) 4321.878
Could this query be rewritten to delimit every member with a pipe "|", for instance? Such as:
(|January|, |BE08008309-North, Central and East HQ Mtmt Adj.|, |2014|, |ICP|, |US Dollar|, |TOPC1|, |TOPC2|, |TOPC3|, |ACTUAL|, |ACCUMDEP - Accumulated Depreciation|) 4321.878
Your help would be much appreciated.
Thank you.
Bachatero

FYI, I've found a workaround: rewriting my MDX query to include DIMENSION PROPERTIES [Period].[MEMBER_NAME], [Period].[MEMBER_ALIAS], [Entity].[MEMBER_NAME], [Entity].[MEMBER_ALIAS], and so forth.
My query then looks like the following:
SELECT
  {([Version].[FINAL])} ON COLUMNS,
  crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(crossjoin(
      { [Period].[Jan], [Period].[Dec], [Period].[Sep] },
      { [Entity].[BE08008309], [Entity].[BTSEMEALA] }),
      { [Years].[2018], [Years].[2017], [Years].[2014] }),
      { [ICP].[ICP] }),
      { [Currency].[USD] }),
      { [Custom1].[TOPC1] }),
      { [Custom2].[TOPC2] }),
      { [Custom3].[TOPC3] }),
      { [Scenario].[Actual], [Scenario].[Junfor], [Scenario].[PlanRestate] }),
      { [Account].[RF_ACCUMDEP], [Account].[COSAMORT] })
  DIMENSION PROPERTIES
      [Period].[MEMBER_NAME], [Period].[MEMBER_ALIAS],
      [Entity].[MEMBER_NAME], [Entity].[MEMBER_ALIAS],
      [Years].[MEMBER_NAME],
      [ICP].[MEMBER_NAME],
      [Currency].[MEMBER_NAME], [Currency].[MEMBER_ALIAS],
      [Custom1].[MEMBER_NAME], [Custom1].[MEMBER_ALIAS],
      [Custom2].[MEMBER_NAME], [Custom2].[MEMBER_ALIAS],
      [Custom3].[MEMBER_NAME],
      [Scenario].[MEMBER_NAME],
      [Account].[MEMBER_NAME], [Account].[MEMBER_ALIAS]
  ON ROWS
FROM [EssRptg.EssRptg]
Then I parsed the output of the query, removing the unnecessary "[MEMBER_NAME] = " and "[MEMBER_ALIAS] = " strings and splitting the string on "type: STRING,". With that I was able to overcome my initial problem: commas inside some of my member columns.
Cheers.
Bachatero

Related

Add computed field to Query in Grafana using JSON API as data source

What I am trying to achieve:
I would like to have a time series chart showing the total number of members in my club at any time. This member count should be calculated from the fields "Eintrittsdatum" (joining date) and "Austrittsdatum" (leaving date). I'm thinking of it as a running sum: every filled joining-date field means +1 on the member count, every leaving-date entry is a -1.
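In SQL terms (just to illustrate the calculation I'm after; the JSON API datasource doesn't run SQL, and the events table here is hypothetical, one row per join/leave event), the desired series would be:
-- delta is +1 for a filled Eintrittsdatum, -1 for a filled Austrittsdatum;
-- datum is the corresponding date.
SELECT
  datum,
  SUM(delta) OVER (ORDER BY datum) AS member_count
FROM events
ORDER BY datum;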
Data structure
I’m calling the API of webling.ch with a secret key. This is my data structure with sample data per member:
[
  {
    "type": "member",
    "meta": {
      "created": "2020-03-02 11:33:00",
      "createuser": {
        "label": "Joana Doe",
        "type": "user"
      },
      "lastmodified": "2022-12-06 16:32:56",
      "lastmodifieduser": {
        "label": "Joana Doe",
        "type": "user"
      }
    },
    "readonly": true,
    "properties": {
      "Mitglieder ID": 99,
      "Anrede": "Dear",
      "Vorname": "Jon",
      "Name": "Doe",
      "Strasse": "Doeington Street",
      "Adresszusatz": null,
      "PLZ": "9999",
      "Ort": "Doetown",
      "E-Mail": "jon.doe#doenet.net",
      "Telefon Privat": null,
      "Telefon Geschäft": null,
      "Mobile": "099 877 54 54",
      "Geschlecht": "m",
      "Geburtstag": "1966-03-10",
      "Mitgliedschaftstyp": "Aktivmitgliedschaft",
      "Eintrittsdatum": "2020-03-01",
      "Austrittsdatum": null,
      "Passfoto": null,
      "Wordpress Benutzername": null,
      "Wohnhaft im Glarnerland": false,
      "Lat": "43.1563379",
      "Long": "6.0474622"
    },
    "parents": [
      240
    ],
    "children": {},
    "links": {
      "debitor": [
        2124,
        3056,
        3897
      ],
      "attendee": [
        2576
      ]
    },
    "id": 1815
  }
]
Grafana data source
I am using the "JSON API" plugin by Marcus Olsson: https://github.com/grafana/grafana-json-datasource (a data source plugin for loading JSON APIs into Grafana).
Grafana v9.3.1 (89b365f8b1) on Linux
My current approach
Queries:
Query C - uses a filter on the source API to show only entries where "Eintrittsdatum" IS NOT EMPTY
Field 1 (alias "datum") has a JSONata-Query of:
properties.Eintrittsdatum
Field 2 (alias "names") should return the full name and has a query of:
$map($.properties, function($v) {(
($v.Vorname&" "&$v.Name);
)})
Field 3 (alias "value") should return "1" for every entry and has a query of:
$map($.properties, function($v) {(
(1);
)})
Query D - uses a filter on the source API to show only entries where "Austrittsdatum" IS NOT EMPTY
Field 1 (alias "datum") has a JSONata-Query of:
properties.Austrittsdatum
Field 2 (alias "names") should return the full name and has a query of:
$map($.properties, function($v) {(
($v.Vorname&" "&$v.Name);
)})
Field 3 (alias "value") should return "1" for every entry and has a query of:
$map($.properties, function($v) {(
(1);
)})
Here's a screenshot to clarify things
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-1.png)
Transformations:
My applied transformations
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-2.png)
What's working
I can correctly gather the number of members added/subtracted per day.
What's not working
I can't get the graph to display the way I want: I'd like to have a running sum of these numbers instead of the following two graphs.
Time series graph with merged queries
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-3.png)
Time series graph with unmerged queries
(https://zigerschlitzmakers.ch/wp-content/uploads/2023/01/ScreenshotGrafana-4.png)
I can't get the names to display within the tooltip of the data points (really not THAT necessary).

Select Dynamic Column from JSON_TABLE

I have nested JSON like:
{
  "Col1": "Val1",
  "Col2": {
    "NestedCol1": "Nested Value 1",
    "NestedCol2": "Nested Value 2",
    "NestedCol3": "Nested Value 3"
  },
  "Col3": {
    "NestedCol1": "Nested Value 1",
    "NestedCol2": "Nested Value 2",
    "NestedCol3": "Nested Value 3"
  }
}
Both nested columns have the same column names.
I want to select, based on a parameter, either the NestedCol2 values or the NestedCol3 values.
I want to create a generic PL/SQL function where the user can pass the column name, since more nested columns with the same structure may exist in the future.
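One way to approach this: the COLUMNS ... PATH clause of JSON_TABLE only accepts literal paths, so a generic function has to assemble the query with dynamic SQL. Below is a minimal sketch, assuming a table docs with the JSON document in a column doc (both placeholder names), whitelisting the permitted column names so the concatenated path cannot be abused for injection:
CREATE OR REPLACE FUNCTION get_nested_values (
  p_nested_col IN VARCHAR2  -- e.g. 'NestedCol2' or 'NestedCol3'
) RETURN SYS_REFCURSOR
IS
  l_cur SYS_REFCURSOR;
BEGIN
  -- Whitelist: the name is concatenated into the JSON path below,
  -- so reject anything unexpected.
  IF p_nested_col NOT IN ('NestedCol1', 'NestedCol2', 'NestedCol3') THEN
    RAISE_APPLICATION_ERROR(-20001, 'Unknown nested column: ' || p_nested_col);
  END IF;

  OPEN l_cur FOR
    'SELECT jt.val
       FROM docs d,
            JSON_TABLE(d.doc, ''$.Col2''
              COLUMNS (val VARCHAR2(200) PATH ''$."' || p_nested_col || '"'')) jt';
  RETURN l_cur;
END;
/
The outer column ($.Col2 vs. $.Col3) can be parameterized and whitelisted the same way if needed.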

SQL JOIN values to ARRAY to create another ARRAY field

Maybe my searching is poor, but I couldn't find this question or answer anywhere.
Suppose I have a table CLASSROOM like:
[ { teacher_id: T1,
students: [S11, S12, S13]},
{ teacher_id: T2,
students: [S21, S22, S23]}]
The "students" field is an array of student_id's. There is also a table STUDENTS like:
[ { id: S11, name: "Aaron"}, { id: S12, name: "Bob"}, { id: S13, name: "Charlie"},
{ id: S21, name: "Amy"}, { id: S22, name: "Becky"}, { id: S23, name: "Cat"} ]
I want to create the output table which has rows like:
[ { teacher_id: T1,
students: [S11, S12, S13 ],
names: [ "Aaron", "Bob", "Charlie" ] },
{ teacher_id: T2,
students: [S21, S22, S23 ],
names: [ "Amy", "Becky", "Cat" ] } ]
(Yes, this example is silly, but I don't want to bore you with my case.)
I suppose I could FLATTEN the CLASSROOM table, then do a straight join, but my real table is large & complicated enough that I want to avoid it if I can. Is there a better way?
Note: assume students can be in multiple classes. Teachers (teacher_id) are unique.
The idea is to flatten the array and reaggregate. I'm not 100% sure of the syntax in Snowflake, but I think this will work:
select c.*,
(select array_agg(ss.name) within group (order by s.index) as student_names
from table(flatten(input => c.students, mode => 'array')) s join
students ss
on ss.id = s.value
) as names
from classroom c;
Based on Gordon Linoff's answer (https://stackoverflow.com/users/1144035/gordon-linoff), I got the following to work in Snowflake:
create table CLASSROOM (teacher_id VARCHAR, students ARRAY);
insert into CLASSROOM select $1, parse_json($2)
from values ('T1','["S11","S12","S13"]'),('T2','["S21","S22","S23"]');
create table STUDENTS (id VARCHAR, name VARCHAR);
insert into STUDENTS values ('S11','Aaron'),('S12','Bob'),('S13','Charlie'),('S21','Amy'),('S22','Becky'),('S23','Cat');
select teacher_id,
array_agg(s.value::String) as student_ids,
array_agg(ss.name) as student_names
from CLASSROOM, table(flatten(input => CLASSROOM.students, mode => 'array')) s
join STUDENTS ss on ss.id = s.value
group by teacher_id, s.SEQ
order by teacher_id;

How to perform a SELECT in the results returned from a GROUP BY Druid?

I am having a hard time converting this simple SQL query below into Druid:
SELECT country, city, COUNT(*)
FROM people_data
WHERE name = 'Mary'
GROUP BY country, city;
So I came up with this query so far:
{
  "queryType": "groupBy",
  "dataSource": "people_data",
  "granularity": "all",
  "metric": "num_of_pages",
  "dimensions": ["country", "city"],
  "filter": {
    "type": "and",
    "fields": [
      {
        "type": "in",
        "dimension": "name",
        "values": ["Mary"]
      },
      {
        "type": "javascript",
        "dimension": "email",
        "function": "function(value) { return (value.length !== 0) }"
      }
    ]
  },
  "aggregations": [
    { "type": "longSum", "name": "num_of_pages", "fieldName": "count" }
  ],
  "intervals": [ "2016-07-20/2016-07-21" ]
}
The query above runs, but it doesn't seem like the groupBy in the Druid data source is even being evaluated, since I see people in my output with names other than Mary. Does anyone have any input on how to make this work?
The simple answer is that you cannot select arbitrary dimensions in your groupBy queries.
Strictly speaking, even the SQL query does not make sense. If, for a given combination of country and city, there are many different values of name and street, how do you squeeze those into a single row? You have to aggregate them, e.g. by using the max function.
In this case you can include the same column in your data as both a dimension and a metric, e.g. name_dim and name_metric, and include a corresponding aggregation over your metric, max(name_metric).
Please note that if these columns (name etc.) have high-cardinality values, that will kill Druid's roll-up feature.
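For reference, a groupBy that just mirrors the SQL at the top of the question (a count per country and city for name = 'Mary', with no extra dimensions) could look like the sketch below; note that "metric" is a topN parameter and is ignored by groupBy, so it is dropped, and a simple selector filter keeps only Mary's rows (names taken from the question):
{
  "queryType": "groupBy",
  "dataSource": "people_data",
  "granularity": "all",
  "dimensions": ["country", "city"],
  "filter": { "type": "selector", "dimension": "name", "value": "Mary" },
  "aggregations": [
    { "type": "longSum", "name": "num_of_pages", "fieldName": "count" }
  ],
  "intervals": ["2016-07-20/2016-07-21"]
}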

Cannot pass input field of repeated record type into BigQuery UDF

When I pass an input field of repeated record type into a BigQuery UDF, it keeps saying that the input field is not found.
These are my 2 rows of data:
{"name":"cynthia", "Persons":[ { "name":"john","age":1},{"name":"jane","age":2} ]}
{"name":"jim","Persons":[ { "name":"mary","age":1},{"name":"joe","age":2} ]}
This is the schema of the data:
[
  {"name": "name", "type": "string"},
  {"name": "Persons", "mode": "repeated", "type": "RECORD",
   "fields": [
     {"name": "name", "type": "STRING"},
     {"name": "age", "type": "INTEGER"}
   ]
  }
]
And this is the query:
SELECT
  name, maxts
FROM js(
  // input table
  [dw_test.clokTest_bag],
  // input columns
  name, Persons,
  // output schema
  "[{name: 'name', type: 'string'},
    {name: 'maxts', type: 'string'}]",
  // function
  "function(r, emit)
   {
     emit({name: r.name, maxts: '2'});
   }"
)
LIMIT 10
The error I got when trying to run the query:
Error: 5.3 - 15.6: Undefined input field Persons
Job ID: ord2-us-dc:job_IPGQQEOo6NHGUsoVvhqLZ8pVLMQ
Would someone please help?
Thank you.
In your list of input columns, list the leaf fields directly:
//input columns
name, Persons.name, Persons.age,
They'll still appear in their proper structure when you get the records in your UDF.
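Putting it together, the query from the question becomes:
SELECT
  name, maxts
FROM js(
  // input table
  [dw_test.clokTest_bag],
  // input columns: list the leaf fields of the repeated record
  name, Persons.name, Persons.age,
  // output schema
  "[{name: 'name', type: 'string'},
    {name: 'maxts', type: 'string'}]",
  // function
  "function(r, emit)
   {
     emit({name: r.name, maxts: '2'});
   }"
)
LIMIT 10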