Fast dynamic named set calculation - SSAS

I have a long, complex query with a lot of calculations and conditions, but the main structure looks like this:
WITH
MEMBER [Id1] AS [Level].[Level1].CurrentMember.Member_Key
MEMBER [Id2] AS [Level].[Level2].CurrentMember.Member_Key
MEMBER [Level].[Level1].[FirstSet] AS NULL
MEMBER [Level].[Level1].[SecondSet] AS NULL
SET [Set 1] AS {some processed set members}
SET [Set 2] AS {some other processed set members}
SET [Common CrossJoin Set] AS [Level].[Level2].Members
MEMBER [Calculated Measure 1] AS
IIF([Level].[Level].CurrentMember.Member_Key = 'FirstSet',
SUM(existing [Set 1]),
IIF([Level].[Level].CurrentMember.Member_Key = 'SecondSet',
SUM(existing [Set 2]),
SUM([Measures].[Measure1]) * 15
)
)
MEMBER [Calculated Measure 2] AS
IIF([Level].[Level].CurrentMember.Member_Key = 'FirstSet',
SUM(existing [Set 1]),
IIF([Level].[Level].CurrentMember.Member_Key = 'SecondSet',
SUM(existing [Set 2]),
SUM([Measures].[Measure2]) * 20
)
)
SELECT
{ [Id1], [Id2], [Calculated Measure 1], [Calculated Measure 2]} ON COLUMNS,
{ ([Common CrossJoin Set], [Level].[Level1].[FirstSet]),
([Common CrossJoin Set], [Level].[Level1].[SecondSet])
} ON ROWS
FROM [Cube]
So the resulting table looks like this:
║                ║                          ║ Id1  ║ Id2   ║ Measure1 ║ Measure2 ║
║ L2 Member      ║ L1.FirstSet Member       ║ L2-1 ║ L1-8  ║ 1        ║ 5        ║
║ L2 Member      ║ L1.FirstSet Member       ║ L2-2 ║ L1-9  ║ 2        ║ 6        ║
║ L2 Member      ║ L1.SecondSet Member      ║ L2-3 ║ L1-98 ║ 3        ║ 7        ║
║ L2 Member      ║ L1.SecondSet Member      ║ L2-4 ║ L1-99 ║ 4        ║ 8        ║
The result is correct, but the query is very slow (>4 sec). My actual query is bigger and contains many such sets and measures, so it seems the problem lies in the Existing function and the overall structure, which prevents the engine from applying its internal optimizations.
This kind of solution is wrong and ugly, but how can I rewrite it to get the same result faster?

I suspect the bottleneck is that, in your use of Iif, neither of the logical branches is NULL, so you're not getting block-mode calculations. The better way of using Iif is Iif(someBoolean, X, NULL) or Iif(someBoolean, NULL, X), but unfortunately in your case you cannot have NULL in either branch.
Maybe you could try implementing this type of pattern suggested by Mosha for replacing Iif:
WITH
MEMBER Measures.[Normalized Cost] AS [Measures].[Internet Standard Product Cost]
CELL CALCULATION ScopeEmulator
FOR '([Promotion].[Promotion Type].&[No Discount],measures.[Normalized Cost])'
AS [Measures].[Internet Freight Cost]+[Measures].[Internet Standard Product Cost]
MEMBER [Ship Date].[Date].RSum AS Sum([Ship Date].[Date].[Date].MEMBERS), SOLVE_ORDER=10
SELECT
[Promotion].[Promotion Type].[Promotion Type].MEMBERS on 0
,[Product].[Subcategory].[Subcategory].MEMBERS*[Customer].[State-Province].[State-Province].MEMBERS ON 1
FROM [Adventure Works]
WHERE ([Ship Date].[Date].RSum, Measures.[Normalized Cost])
This is from this blog post about optimizing Iif: http://sqlblog.com/blogs/mosha/archive/2007/01/28/performance-of-iif-function-in-mdx.aspx
So looking at one of your calculations - this one:
MEMBER [Calculated Measure 1] AS
IIF([Level].[Level].CurrentMember.Member_Key = 'FirstSet',
SUM(existing [Set 1]),
IIF([Level].[Level].CurrentMember.Member_Key = 'SecondSet',
SUM(existing [Set 2]),
SUM([Measures].[Measure1]) * 15
)
)
I think we could initially break it down into this:
MEMBER [Measures].[x] AS SUM(existing [Set 1])
MEMBER [Measures].[y] AS SUM(existing [Set 2])
MEMBER [Measures].[z] AS SUM([Measures].[Measure1]) * 15
MEMBER [Calculated Measure 1] AS
IIF([Level].[Level].CurrentMember IS [Level].[Level].[Level].[FirstSet],
[Measures].[x],
IIF([Level].[Level].CurrentMember IS [Level].[Level].[Level].[SecondSet],
[Measures].[y],
[Measures].[z]
)
)
Now trying to apply Mosha's pattern (not something I've tried before, so you will need to adjust accordingly):
MEMBER [Measures].[z] AS SUM([Measures].[Measure1]) * 15
MEMBER [Measures].[y] AS SUM(existing [Set 2])
MEMBER [Measures].[x] AS SUM(existing [Set 1])
MEMBER [Calculated Measure 1 pre1] AS [Measures].[z]
CELL CALCULATION ScopeEmulator
FOR '([Level].[Level].[Level].[SecondSet],[Calculated Measure 1 pre1])'
AS [Measures].[y]
MEMBER [Calculated Measure 1] AS [Calculated Measure 1 pre1]
CELL CALCULATION ScopeEmulator
FOR '([Level].[Level].[Level].[FirstSet],[Calculated Measure 1])'
AS [Measures].[x]

How to show nullable decimal values without unnecessary zeros?

I have a table like below:
CREATE TABLE a
(
ID INT,
V DECIMAL(28, 10) NULL
)
INSERT INTO a(ID, V)
VALUES(1, 12.345)
INSERT INTO a(ID)
VALUES(2)
The desired output is like
╔════╤═══════╗
║ ID │ V ║
╠════╪═══════╣
║ 1 │ 12.35 ║
╟────┼───────╢
║ 2 │ ║
╚════╧═══════╝
But with this query, I got NULL for row 2:
SELECT ID, ROUND([V], 2) AS V
FROM a;
ID V
1 12.3500000000
2 NULL
With this query I got an unwanted 0 for row 2:
SELECT ID, ROUND(ISNULL(CAST(V AS VARCHAR(50)), ''), 2) AS V
FROM a;
ID V
1 12.35
2 0
Can anybody help please? Database is SQL Server 2005.
UPDATED:
This query will result in unwanted scale:
SELECT ID, ISNULL(CAST(ROUND(V, 2) AS VARCHAR(50)), '') AS V
FROM a;
ID V
1 12.3500000000
2
You can't do this directly, because V is still a number when you ROUND it, and mixing a number with an empty string forces the string to be converted to a number. For example:
select 1 union select ''
This leads to the output
1
0
So what you need to do is round V, then convert it to a string while keeping the rounding. Easier said than done:
SELECT ID,
       CASE WHEN V IS NULL THEN ''
            ELSE LEFT(CAST(ROUND(V, 2) AS VARCHAR(50)),
                      CHARINDEX('.', CAST(ROUND(V, 2) AS VARCHAR(50))) + 2)
       END AS V
FROM a;
SQL Fiddle
I just use some brute-force string manipulation to find the decimal point and trim the string that way.
If you want to keep the NULL value as NULL, then simply remove the ISNULL():
SELECT ID, ROUND(CAST(V AS VARCHAR(50)), 2) AS V
FROM a;
If you know the precision that you want, then I would suggest using either STR() or casting to a decimal with the specified precision.
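For example, a minimal sketch of both options (assuming two decimal places is the precision you want; the ISNULL() is only needed if you prefer an empty string over NULL):
-- Cast to DECIMAL(28, 2) first (which rounds), then to a string, so only two decimals remain
SELECT ID, ISNULL(CAST(CAST(V AS DECIMAL(28, 2)) AS VARCHAR(50)), '') AS V
FROM a;
-- Or use STR(), which formats with a fixed number of decimals and returns NULL for NULL input
SELECT ID, ISNULL(LTRIM(STR(V, 20, 2)), '') AS V
FROM a;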
You should:
ROUND() to 2 decimal places,
then CAST() to string,
followed by ISNULL() to convert NULL to an empty string.
Query:
SELECT ID, ISNULL(CAST(ROUND(V, 2) AS VARCHAR(50)), '') AS V
FROM a;

SQL server Query for Bill of materials quantity

I have a bill of material (bom). Lets call this BOM 301755.
BOM 301755 is made of these parts
31161201 = need 1 pc of this
29975413 = need 2 pcs of this
299756 = need 2 pcs of this
And 305958 = need 1 pc of this
This would be level zero.
Now Lets focus on one of the part. Part: 29975413.
Part 29975413 is made of:
PLTSSL902 = 1pc
CAPSSL902 = 1pc
PIPSSL4SCH40 = 3.96
And LABSTR = 0.166
Now since we need 2 pcs of 29975413.
how can I do the query so it will show as follows:
PLTSSL902 = 1pc x 2 = 2 pc
CAPSSL902 = 1pc x 2 = 2 pc
PIPSSL4SCH40 = 3.96 x 2 = 7.92
And LABSTR = 0.166 x 2 = 0.332
I drew this to make it easier to read. :)
thank you
To make it a bit more general in its application, I modified @dazedandconfused's answer a little bit:
;WITH bom as (
SELECT pid p, cid c, qty q, 0 bomlvl FROM #t WHERE pid='301755' -- start id
UNION ALL
SELECT pid, cid, q*qty, bomlvl+1 FROM #t INNER JOIN bom ON c=pid
)
SELECT * from bom a WHERE NOT EXISTS (SELECT 1 FROM bom WHERE p=a.c)
This query calculates the BOM level for each line and will list only those elements of the BOM that do not have children, regardless of how many levels your BOM might have. A fiddle can be found here.
My example will deliver the result:
p c q bomlvl
-------- ------------ ----- ------
305958 311620 4 1
305958 311620 0.1 1
299756 RDBSSL012 0.2 1
299756 RDBSSL012 6.834 1
29975413 PLTSSL902 2 1
29975413 CAPSSL4SCH40 2 1
29975413 PIPSSL4SCH40 7.92 1
29975413 LABSTR 0.332 1
31161201 PIPSSL2SCH40 4 1
You could go one step further and group the results by their c-id to get their total amounts used in a particular BOM. A table-valued function would be the best way of writing this, where you pass the initial Id as a parameter. I cannot demonstrate this in my data.stackexchange fiddle since functions cannot reference temporary tables, but the function definition should look more or less like this:
CREATE FUNCTION bomqty ( @pid varchar(20) ) RETURNS TABLE AS RETURN (
WITH bom as (
SELECT pid p, cid c, qty q, 0 bomlvl FROM tbl WHERE pid = @pid
UNION ALL
SELECT pid, cid, q*qty, bomlvl+1 FROM tbl INNER JOIN bom ON c = pid
)
SELECT c item, sum(q) totalqty FROM bom a
WHERE NOT EXISTS (SELECT 1 FROM bom WHERE p = a.c)
GROUP BY c
);
The function can then be used like any other table, like this:
SELECT * FROM bomqty('301755')
This will get you
item totalqty
------------ --------
311620 4.1
CAPSSL4SCH40 2
LABSTR 0.332
PIPSSL2SCH40 4
PIPSSL4SCH40 7.92
PLTSSL902 2
RDBSSL012 7.034
You can use a recursive Common Table Expression to walk the hierarchy of parts, passing the quantity of the parent and multiplying the children's quantities by it.
DECLARE @bom int = 301755
CREATE TABLE #t(
BOM int,
KitID varchar(20),
SubAssy varchar(20),
BOMLevel int,
StdQty float
)
INSERT #t(BOM, KitID, SubAssy, BOMLevel, StdQty) VALUES
(301755, '301755', '31161201', 0, 1),
(301755, '301755', '29975413', 0, 2),
(301755, '301755', '299756', 0, 2),
(301755, '301755', '305958', 0, 1),
(301755, '305958', '311620', 1, 4),
(301755, '305958', '311620', 1, .1),
(301755, '299756', 'RDBSSL012', 1, .1),
(301755, '299756', 'RDBSSL012', 1, 3.417),
(301755, '29975413', 'PLTSSL902', 1, 1),
(301755, '29975413', 'CAPSSL4SCH40', 1, 1),
(301755, '29975413', 'PIPSSL4SCH40', 1, 3.96),
(301755, '29975413', 'LABSTR', 1, .166),
(301755, '31161201', 'PIPSSL2SCH40', 1, 4)
;WITH cte AS (
SELECT KitID, SubAssy, StdQty FROM #t WHERE KitID = @bom
UNION ALL
SELECT #t.KitID, #t.SubAssy, cte.StdQty * #t.StdQty FROM #t
INNER JOIN cte ON cte.SubAssy = #t.KitID
)
SELECT * FROM cte ORDER BY KitID, SubAssy
something like this should work:
CREATE TABLE #TableBom
(
Bom INT
,KitId INT
,SubAssy VARCHAR(20)
,BomLevel INT
,StdQty DECIMAL(10 ,3)
);
INSERT INTO #TableBom
SELECT 301755, 301755, '29975413', 0, 2
UNION ALL
SELECT 301755, 29975413, 'PLTSSL902', 1, 1
UNION ALL
SELECT 301755, 29975413, 'CAPSSL902', 1, 1
UNION ALL
SELECT 301755, 29975413, 'PIPSSL4SCH40',1,3.96
UNION ALL
SELECT 301755, 29975413, 'LABSTR', 1, 0.166
UNION ALL
SELECT 301755, 299756, 'RDBSSL012', 1, 3.147
SELECT b.Bom
,b2.SubAssy
,CONCAT(b2.SubAssy, ' = ' ,CAST(b2.StdQty AS DECIMAL(10,3)) ,' pc x ' ,CAST(b.StdQty AS DECIMAL(10,3)) ,' = ' ,CAST((b2.StdQty * b.StdQty) AS DECIMAL(10,2)) ,' pc') AS Calc
FROM #TableBom AS b
INNER JOIN #TableBom AS b2 ON b.SubAssy = CAST(b2.KitId AS VARCHAR(20));
Bom SubAssy Calc
301755 PLTSSL902 PLTSSL902 = 1.000 pc x 2.000 = 2.00 pc
301755 CAPSSL902 CAPSSL902 = 1.000 pc x 2.000 = 2.00 pc
301755 PIPSSL4SCH40 PIPSSL4SCH40 = 3.960 pc x 2.000 = 7.92 pc
301755 LABSTR LABSTR = 0.166 pc x 2.000 = 0.33 pc
EDIT:
If you only want to include the 29975413 sub-assembly, you can add WHERE b.SubAssy = '29975413', as shown below:
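For example (the same query as above, only with the filter added):
SELECT b.Bom
,b2.SubAssy
,CONCAT(b2.SubAssy, ' = ' ,CAST(b2.StdQty AS DECIMAL(10,3)) ,' pc x ' ,CAST(b.StdQty AS DECIMAL(10,3)) ,' = ' ,CAST((b2.StdQty * b.StdQty) AS DECIMAL(10,2)) ,' pc') AS Calc
FROM #TableBom AS b
INNER JOIN #TableBom AS b2 ON b.SubAssy = CAST(b2.KitId AS VARCHAR(20))
WHERE b.SubAssy = '29975413';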

Split a string into rows using pure SQLite

Using SQLite, I'd like to split a string in the following way.
Input string:
C:\Users\fidel\Desktop\Temp
and have the query return these rows:
C:\
C:\Users\
C:\Users\fidel\
C:\Users\fidel\Desktop\
C:\Users\fidel\Desktop\Temp
In other words, I'd like to split a file path into its constituent paths. Is there a way to do this in pure SQLite?
This is possible with a recursive common table expression:
WITH RECURSIVE split(s, last, rest) AS (
VALUES('', '', 'C:\Users\fidel\Desktop\Temp')
UNION ALL
SELECT s || substr(rest, 1, 1),
substr(rest, 1, 1),
substr(rest, 2)
FROM split
WHERE rest <> ''
)
SELECT s
FROM split
WHERE rest = ''
OR last = '\';
(You did not ask for a reasonable way.)
Recursive CTE:
WITH RECURSIVE cte(org, part, rest, pos) AS (
VALUES('C:\Users\fidel\Desktop\Temp', '','C:\Users\fidel\Desktop\Temp'|| '\', 0)
UNION ALL
SELECT org,
SUBSTR(org,1, pos + INSTR(rest, '\')),
SUBSTR(rest, INSTR(rest, '\')+1),
pos + INSTR(rest, '\')
FROM cte
WHERE INSTR(rest, '\') > 0
)
SELECT *
FROM cte
WHERE pos <> 0
ORDER BY pos;
SqlFiddleDemo
Output:
╔═════════════════════════════╗
║ part ║
╠═════════════════════════════╣
║ C:\ ║
║ C:\Users\ ║
║ C:\Users\fidel\ ║
║ C:\Users\fidel\Desktop\ ║
║ C:\Users\fidel\Desktop\Temp ║
╚═════════════════════════════╝
How it works:
org - the original string; it does not change
part - effectively a `LEFT` of the original string, taking pos characters
rest - effectively the `RIGHT` equivalent, i.e. the rest of the org string
pos - the position within org of the `\` just consumed (the previous pos plus the position of the first `\` in rest)
Trace:
╔══════════════════════════════╦══════════════════════════════╦════════════════════════════╦═════╗
║ org ║ part ║ rest ║ pos ║
╠══════════════════════════════╬══════════════════════════════╬════════════════════════════╬═════╣
║ C:\Users\fidel\Desktop\Temp ║ C:\ ║ Users\fidel\Desktop\Temp\ ║ 3 ║
║ C:\Users\fidel\Desktop\Temp ║ C:\Users\ ║ fidel\Desktop\Temp\ ║ 9 ║
║ C:\Users\fidel\Desktop\Temp ║ C:\Users\fidel\ ║ Desktop\Temp\ ║ 15 ║
║ C:\Users\fidel\Desktop\Temp ║ C:\Users\fidel\Desktop\ ║ Temp\ ║ 23 ║
║ C:\Users\fidel\Desktop\Temp ║ C:\Users\fidel\Desktop\Temp ║ ║ 28 ║
╚══════════════════════════════╩══════════════════════════════╩════════════════════════════╩═════╝
If you want to split out the values individually, use the code below:
WITH RECURSIVE split(content, last, rest) AS (
VALUES('', '', 'value1§value2§value3§value4§value5§value6§value7')
UNION ALL
SELECT
CASE WHEN last = '§'
THEN
substr(rest, 1, 1)
ELSE
content || substr(rest, 1, 1)
END,
substr(rest, 1, 1),
substr(rest, 2)
FROM split
WHERE rest <> ''
)
SELECT
REPLACE(content, '§','') AS 'ValueSplit'
FROM
split
WHERE
last = '§' OR rest ='';
Result:
**ValueSplit**
value1
value2
value3
value4
value5
value6
value7
I hope this helps people with the same problem.
There's a simpler alternative to the recursive CTE that can also be applied to any number of file paths in a result set (or, generally, any delimited strings that you want to "split" into multiple rows by a separator).
SQLite has the JSON1 extension. It requires SQLite >= 3.9.0 (2015-10-14), but sqlite3 is almost always compiled with it now (e.g. on Ubuntu, Debian, and the official Python Docker images); you can check with PRAGMA compile_options, and this answer has a little more detail on it.
JSON1 has json_each, which is one of the two table-valued functions in the extension that:
walk the JSON value provided as their first argument and return one row for each element.
Hence if you can turn your string into a JSON array string, this function will do the rest. And it's not hard to do.
const sql = `
WITH input(filename) AS (
VALUES
('/etc/redis/redis.conf'),
('/run/redis/redis-server.pid'),
('/var/log/redis-server.log')
), tmp AS (
SELECT
filename,
'["' || replace(filename, '/', '", "') || '"]' as filename_array
FROM input
)
SELECT (
SELECT group_concat(ip.value, '/')
FROM json_each(filename_array) ip
WHERE ip.id <= p.id
) AS path
FROM tmp, json_each(filename_array) AS p
WHERE p.id > 1 -- because the filenames start with the separator
`
async function run() {
const wasmUrl = 'https://cdnjs.cloudflare.com/ajax/libs/sql.js/1.5.0/sql-wasm.wasm'
const sqljs = await window.initSqlJs({locateFile: file => wasmUrl})
const db = new sqljs.Database()
const results = db.exec(sql)
ko.applyBindings(results[0])
}
run()
<script src="https://cdnjs.cloudflare.com/ajax/libs/knockout/3.4.2/knockout-min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/sql.js/1.5.0/sql-wasm.min.js"></script>
<table>
<thead>
<tr data-bind="foreach: columns"><th data-bind="text: $data"></th></tr>
</thead>
<tbody data-bind="foreach: values">
<tr data-bind="foreach: $data"><td data-bind="text: $data"></td></tr>
</tbody>
</table>
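For reference, the SQL embedded in the snippet above can also be run on its own (for example in the sqlite3 shell), assuming the JSON1 extension is available:
WITH input(filename) AS (
VALUES
('/etc/redis/redis.conf'),
('/run/redis/redis-server.pid'),
('/var/log/redis-server.log')
), tmp AS (
SELECT
filename,
'["' || replace(filename, '/', '", "') || '"]' as filename_array
FROM input
)
SELECT (
SELECT group_concat(ip.value, '/')
FROM json_each(filename_array) ip
WHERE ip.id <= p.id
) AS path
FROM tmp, json_each(filename_array) AS p
WHERE p.id > 1; -- because the filenames start with the separator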
Inspired by Lukasz Szozda's answer:
WITH RECURSIVE cte("pre","post") AS (
VALUES('C:', 'Users\fidel\Desktop\Temp' || '\')
UNION ALL
SELECT "pre" || '\' || left("post", position('\' in "post")-1),
substring("post" from position('\' in "post")+1)
FROM cte
WHERE "post" > ''
)
SELECT "pre" FROM cte
(tested on PostgreSQL)
The idea is now to replace the VALUES line
VALUES('C:', 'Users\fidel\Desktop\Temp' || '\')
with placeholders like
VALUES(?, ? || '\')
which have been pre-split in the programming language that is going to run the SQL statement above against the database.
Reading the SQLite docs, I see that substring(... from ...) has to be replaced by substr(..., ...) and position(... in ...) is to be replaced by instr(..., ...) with parameters swapped.
Very annoying for me since I wanted SQL code that runs on both PostgreSQL and SQLite.
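For reference, applying those substitutions gives a sketch of the SQLite version (a straight translation of the query above, so adjust as needed):
WITH RECURSIVE cte("pre","post") AS (
VALUES('C:', 'Users\fidel\Desktop\Temp' || '\')
UNION ALL
SELECT "pre" || '\' || substr("post", 1, instr("post", '\') - 1),
substr("post", instr("post", '\') + 1)
FROM cte
WHERE "post" > ''
)
SELECT "pre" FROM cte;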
Simple split using json_each from JSON1:
create table demo as select 'Split,comma,separated,string,into,rows' as DemoString;
select row_number() over () as Part, parts.value as Splitted
from demo, json_each('["'||replace(demo.DemoString,',','","')||'"]') parts;

Equation in SQL query

I am trying to identify duplicate data within our database.
If somebody's first seat is 1 and their last seat is 3, it means they should have num_seats = 3.
So I want to run a query where first_seat - last_seat + 1 = num_seats. Any pointers on what is wrong with my query?
select acct_id, event_id, event_name, section_name, row_name, first_seat, last_seat, num_seat
from dba.v_event
where first_seat - last_seat +1 != num_seat
Your equation is backward. It should be last_seat - first_seat + 1.
In your example, first_seat = 1 and last_seat = 3.
first_seat - last_seat + 1 = 1 - 3 + 1 = -1
last_seat - first_seat + 1 = 3 - 1 + 1 = 3
If you want to allow the seats to be listed in either order, you can use ABS() to get the absolute value of the difference:
ABS(last_seat - first_seat) + 1
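So, using the columns from your query, the corrected check would be:
select acct_id, event_id, event_name, section_name, row_name, first_seat, last_seat, num_seat
from dba.v_event
where last_seat - first_seat + 1 != num_seat
-- or, to allow the seats to be listed in either order:
-- where ABS(last_seat - first_seat) + 1 != num_seat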

MDX custom consolidation: consolidate only last level descendants

I'm looking for an elegant MDX expression that sums values over only the last-level (leaf) elements of the dimension:
I have a measure M, and also a hierarchical parent-child dimension U that is an unbalanced tree:
R -> ( M = R1 + R2 = 157 )
..R1 -> ( M = R11 + R12 = 150 )
....R11 -> ( M = R111 = 100 )
......R111 -> M = 100
....R12 -> M = 50
..R2 -> M = 7
I have a set that contains some elements from this dimension:
S contains R11, R111, R12
Now, for U.currentMember, I need to take the M value, that is, the sum over its last-level (leaf) descendants that are in S.
I have written this expression; it works, but perhaps there is a more elegant way to write it:
with member measures.XX as
sum (
intersect(
[S],
Except(
descendants( [U].currentMember ),
existing( descendants( [U].currentMember ).item(0) )
)
)
,
[M]
)
select
measures.xx on columns
from [CUBE]
where [U].[R]
Note: this MDX doesn't work:
with member measures.XX as
sum (
intersect(
[S],
descendants( [U].currentMember )
)
,
[M]
)
select
measures.xx on columns
from [CUBE]
where [U].[R]
because it returns 250 instead of 150.
The right result is 150: R11 + R12 (because R111 is already included in R11).
The bad result is 250: the 100 value is counted twice, once for R11 and once for R111.
Final Solution:
with member measures.XX as
sum(
intersect (
descendants([U].currentMember,,leaves),
[S]
)
,
[M]
)
select
measures.XX on 0,
descendants( [Unitats].[Unitat].[All] ) on 1
from [C]
Not sure exactly what you want to calculate, but let's assume [Member] is the member you want to evaluate.
I'd use the Descendants, Filter and IsLeaf MDX functions:
Sum(
Filter( Descendants( [Member] ), isLeaf(Member_hierarchy.currentmember) )
,[M])
You're adding up all descendants (including the member itself) that are leaves, i.e. that have no children.