I'm pretty new to SSAS and MDX. I have a measure which is currently an average over time, for example the number of employees, stored by department. I understand this is considered semi-additive. Is there a way to keep it semi-additive but, for example, sum the values by department and then take the MAX over time instead of the average?
Any ideas are appreciated.
Thanks,
AM
I found a useful link which helped me solve my problem.
http://geekswithblogs.net/darrengosbell/archive/2007/10/25/Max-as-a-semi-additive-aggregation-over-time.aspx
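The gist is a calculated member that takes MAX over the leaf-level dates within the current time member, so the other dimensions (like department) still sum first. A rough sketch, where the [Date] hierarchy and [Employee Count] measure are placeholder names, not anything from the actual cube:

CREATE MEMBER CURRENTCUBE.[Measures].[Max Employees Over Time] AS
    MAX(
        // all leaf-level dates under the current time member
        DESCENDANTS([Date].[Calendar].CurrentMember,
                    [Date].[Calendar].[Date]),
        // Sum-aggregated base measure, so departments are summed first
        [Measures].[Employee Count]
    );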
I have a problem which looked like a simple requirement ... it turned out it wasn't ... at least for me.
At the moment I feel like I've read half of the MDX internet ...
I'm using latest Saiku CE (Mondrian 4), and my simplified cube looks like this:
Dimensions:
Machine.Manufacturer
Measures:
Measure.[Msg count]
Measure.[Distinct machines]
Measure.[Distinct days]
Calculated measures:
Measure.[Msg xMxD] which is basically: (Measure.[Msg count] / Measure.[Distinct machines] / Measure.[Distinct days]).
Measure.[Msg xMxd %] which is: (Measure.[Msg xMxD] / SUM(Measure.[Msg xMxD], Machine.[Manufacturer].[All Manufacturers]))
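Written out as MDX, the two calculated measures look roughly like this (a sketch using the standard [Measures] dimension; the comment marks the denominator that causes the trouble described below):

WITH
MEMBER [Measures].[Msg xMxD] AS
    [Measures].[Msg count]
    / [Measures].[Distinct machines]
    / [Measures].[Distinct days]
MEMBER [Measures].[Msg xMxd %] AS
    [Measures].[Msg xMxD]
    -- evaluated at the All member, i.e. a ratio of totals rather than
    -- a total of per-manufacturer ratios; summing over
    -- [Machine].[Manufacturer].[Manufacturer].Members instead (untested
    -- on Mondrian 4) would compute each child ratio first
    / SUM({[Machine].[Manufacturer].[All Manufacturers]},
          [Measures].[Msg xMxD])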
What I want to accomplish is this table:
But as you've probably guessed, I have a problem with the Measure.[Msg xMxd %] measure ...
Because it is calculated on the basis of another calculated measure, the % calculation happens after the summing for a particular Manufacturer, and I don't know how to overcome this.
The closest answer I found was this one: https://forums.pentaho.com/threads/160265-Calculate-members-in-mdx/
... but this concerns only one generated member as a sum of all manufacturers.
I've also found some solutions based on the Axis(...) function, but that is unavailable in Mondrian.
Do you have any ideas? Is there a way to generate a set of calculated members? That would (at least theoretically) make it possible to set the solve order for all child members of [Machine].[Manufacturer].
Any help is much appreciated.
I'm looking to find out how to write a SQL query that finds the store locations contributing 80% of inventory adjustments, along with their inventory accuracy calculation. I'm not quite sure how to go about it. So far I have the total absolute value of their adjustments, which the calculation will be based on. Here's what I have so far. Any help would be appreciated.
SELECT sum(abs(Details.ValueDifference)) As writeoff
    ,(sum(Details.NumberofPartsCounted) - sum(Details.NumberofPartsCountedwithErrors))
        / (sum(Details.NumberofPartsCounted)) As Accuracy
FROM Details;
Alright, I'm not 100% sure what you're looking for, but here's my best guess. It looks like you want to select the location number along with what you're calling "writeoff" and "Accuracy" in your query. You'll have to group by the location number to get the sums for each location.
It sounds like the writeoff column is supposed to tell you how much a particular location has contributed towards total inventory adjustments? Assuming that's true, I would also order by writeoff in descending order so the rows with the highest values appear first.
SELECT N_LOCATION
    ,sum(abs(ValueDifference)) As writeoff
    -- * 1.0 guards against integer division truncating Accuracy to 0,
    -- if the part-count columns are integers
    ,(sum(NumberofPartsCounted) - sum(NumberofPartsCountedwithErrors)) * 1.0
        / sum(NumberofPartsCounted) As Accuracy
FROM Details
GROUP BY
    N_LOCATION
ORDER BY
    writeoff DESC;
Honestly, I think the best way to accomplish what I think you're trying to do would be a stored procedure. I would use the query above along with a query that selects the sum of the absolute value difference across all locations, save that total in a variable, and loop through the results of the first query. That way, you can compare each location's writeoff with the total for all locations to find the percentage for each location.
I'm guessing you would then add locations to a result set until the ones you've added have percentages totaling 80%? I'm not sure exactly how you're defining that 80% rule, but hopefully you can adjust as needed.
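If you'd rather avoid the loop, window functions can produce the same running 80% cut in one statement. A sketch, assuming SQL Server 2012+ and the same Details columns as above (untested; the WHERE keeps the location that crosses the 80% line):

WITH location_totals AS (
    SELECT N_LOCATION,
           SUM(ABS(ValueDifference)) AS writeoff,
           (SUM(NumberofPartsCounted) - SUM(NumberofPartsCountedwithErrors)) * 1.0
               / SUM(NumberofPartsCounted) AS Accuracy
    FROM Details
    GROUP BY N_LOCATION
),
ranked AS (
    SELECT N_LOCATION, writeoff, Accuracy,
           -- cumulative writeoff from the biggest contributor down
           SUM(writeoff) OVER (ORDER BY writeoff DESC
                               ROWS UNBOUNDED PRECEDING) AS running_writeoff,
           SUM(writeoff) OVER () AS total_writeoff
    FROM location_totals
)
SELECT N_LOCATION, writeoff, Accuracy,
       1.0 * running_writeoff / total_writeoff AS running_pct
FROM ranked
-- keep locations until the running total reaches 80% of all adjustments
WHERE running_writeoff - writeoff < 0.80 * total_writeoff
ORDER BY writeoff DESC;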
I'm having trouble getting the results I would like from the query I've built. The overall goal I'm trying to accomplish is to get the first odometer reading of the month and the last odometer reading of the month for a specific vehicle. I would then like to subtract the two to get total miles driven for that month. I figured a derived table with window functions would best help to accomplish this goal (see example SQL below).
SELECT
VEHICLE_ID2_FW
FROM
(SELECT
VEHICLE_ID2_FW,
LOCATION_CODE_FW,
MIN(ODOMETER_FW) OVER(PARTITION BY YEAR(DATE_FW), MONTH(DATE_FW)) AS MIN_ODO,
MAX(ODOMETER_FW) OVER(PARTITION BY YEAR(DATE_FW), MONTH(DATE_FW)) AS MAX_ODO
FROM
GPS_TRIPS_FW) AS G
I keep running into an issue where the derived table's query, by itself, runs and works. However, when I wrap it in the FROM clause, it throws back the error
The multi-part identifier could not be bound
Hoping that I could get some help figuring this out and maybe finding an overall better way to accomplish my goal. Thank you!
Odometers only increase (well, that should be true). So just use aggregation:
select VEHICLE_ID2_FW, year(date_fw), month(date_fw),
min(ODOMETER_FW), max(ODOMETER_FW),
max(ODOMETER_FW) - min(ODOMETER_FW) as miles_driven_in_month
from GPS_TRIPS_FW
group by VEHICLE_ID2_FW, year(date_fw), month(date_fw);
This answers the question that you asked. I don't think it solves your problem, though, because the total miles driven per month will not add up to the total miles driven. The issue is the miles driven between the last record at the end of one month and the first record at the beginning of the next.
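If that gap matters, one possible shape of the fix (a sketch, assuming SQL Server 2012+ for LEAD, and that readings exist in every month) is to charge each month from its first reading up to the next month's first reading:

WITH monthly AS (
    SELECT VEHICLE_ID2_FW,
           YEAR(DATE_FW)  AS yr,
           MONTH(DATE_FW) AS mo,
           MIN(ODOMETER_FW) AS first_odo,
           MAX(ODOMETER_FW) AS last_odo
    FROM GPS_TRIPS_FW
    GROUP BY VEHICLE_ID2_FW, YEAR(DATE_FW), MONTH(DATE_FW)
)
SELECT VEHICLE_ID2_FW, yr, mo,
       -- next month's first reading closes the gap; the last month
       -- falls back to its own final reading
       COALESCE(LEAD(first_odo) OVER (PARTITION BY VEHICLE_ID2_FW
                                      ORDER BY yr, mo),
                last_odo) - first_odo AS miles_driven_in_month
FROM monthly;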
If you need a fuller treatment, ask another question. Provide sample data, desired results, and an appropriate database tag.
Here is my case. I'm doing a small training exercise in creating an OLAP cube in SSAS, and as part of it I need to calculate the median time from issue creation to issue resolution.
According to the Microsoft docs I should use the MEDIAN function in MDX. Here is my code:
MEDIAN([Issue].[Issue ID],[Measures].[Hours Resolved])
Short explanation: [Measures].[Hours Resolved] is a measure calculated in the database as "resolved issue time" minus "creation issue time" using the DATEDIFF function. Both columns are of the smalldatetime datatype.
It looks like it works properly for the case in the screenshot below.
Except for the "Grand Total" value in the Mediana column.
I believe the Grand Total value should be 12, because that is the correct result according to the way a median should be calculated (I checked it in Excel as well). So am I wrong here, and this is the proper behaviour? Or am I missing something in my calculation or configuration in SSAS?
Second case in this exercise.
When I add, for example, the Group Name column, like in the picture below:
In my understanding, the value of the Mediana column for, let's say, the CRM part should be 9.
Can you please tell me whether I'm right or wrong? If I'm right, how do I achieve this? If I'm wrong, please point out the mistake in my solution. This is my first time calculating a median.
It's a little embarrassing that no one even looked at it (16 views, probably all mine), but it doesn't matter anymore because I figured it out myself.
For a proper median calculation across all dimensions, I needed to combine the MEDIAN function with a SCOPE statement in MDX. Here is the code, in case someone faces the same problem in the future:
CREATE MEMBER CURRENTCUBE.[Measures].[Median]
    AS Null,
    VISIBLE = 1;

// Reassign the measure at every coordinate so the median is recomputed
// over the underlying issues at each level, including subtotals and the
// Grand Total, instead of being aggregated from child values.
SCOPE([Measures].[Median]);
    THIS = MEDIAN([View Issue Median].[Issue ID].[Issue ID],
                  [Measures].[Hours Resolved]);
END SCOPE;
I have a table which holds a list of transactions.
Task: To estimate the next transaction amount.
Problem:
The actual payment period for each row is variable; it can be weekly, monthly, or anything chosen by the end user.
Can anyone suggest a good method to estimate the next payment based on previous data?
At the moment I basically normalise the figure back to a daily amount and then multiply by the period, i.e. week/month/quarter/year. Then, given the history, I choose the result with the highest incidence (count).
This does not generate accurate estimates, due to payments within payments that I don't care about, e.g. a £100 real payment plus £20 of additional charges that are irrelevant.
Another way is to calculate the average, standard deviation, and variance between payments, then choose the highest-probability value.
The problem is, I've been unable to code this in SQL.
SELECT [Identifier]
    ,[DateTranEntered]
    ,[Type]
    ,[TranDateFrom]
    ,[TranDateTo]
    ,[Amount]
    ,[ReferenceForTran]
    ,[CreatedDate]
FROM [TranTable]
Perhaps something with recursion through the table: calculate every transaction's daily amount, then, using the variance and incidence, choose from the last 'x' transactions what the estimated guess is?
The problem is I have gotten stuck on the recursive query for this.
Any thoughts about this?
SQL Server Analysis Services has a suite of data mining tools that provide algorithms such as Linear Regression, Decision Trees, and Neural Networks. You can learn more about them here: http://msdn.microsoft.com/en-us/library/ms175595.aspx. It sounds like Linear Regression might be the best place to start for this problem.
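In the meantime, if you want to stay in plain T-SQL, here is a sketch of the average/variance idea from the question. Column names are taken from the SELECT above; treating TranDateFrom/TranDateTo as an inclusive period (the +1) is an assumption, and LAG replaces the recursion:

WITH daily AS (
    SELECT Identifier,
           TranDateFrom,
           -- normalise each transaction to a daily rate; +1 assumes the
           -- from/to dates are inclusive, so a one-day period divides by 1
           Amount * 1.0
               / NULLIF(DATEDIFF(day, TranDateFrom, TranDateTo) + 1, 0)
               AS daily_amount,
           -- days since the previous payment for the same Identifier
           DATEDIFF(day,
                    LAG(TranDateFrom) OVER (PARTITION BY Identifier
                                            ORDER BY TranDateFrom),
                    TranDateFrom) AS days_since_prev
    FROM [TranTable]
)
SELECT Identifier,
       AVG(daily_amount)          AS avg_daily_amount,
       AVG(days_since_prev * 1.0) AS avg_gap_days,
       STDEV(days_since_prev)     AS gap_stdev,  -- spread of the payment cycle
       -- crude next-payment estimate: typical daily rate * typical gap
       AVG(daily_amount) * AVG(days_since_prev * 1.0) AS estimated_next_amount
FROM daily
GROUP BY Identifier;

Transactions whose daily_amount sits far from avg_daily_amount (the £100 plus £20 case) could be filtered out before the final aggregate.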