DAX formula to calculate a sum between 2 dates - PowerPivot

I have a couple of tables in PowerPivot:
A Stock table - WKRelStrength whose fields are:
Ticker, Date, StockvsMarket% (values are percentages), RS+- (values can be 0 or 1)
A Calendar Table - Cal with a Date field.
There is a many to one relationship between the tables.
I am trying to aggregate RS+- against each row for dates between 3 months ago and the date on that row - i.e. a 3-month-to-date sum. I have tried numerous calculations, but the best I can return is a circular reference error. Here is my formula:
=calculate(sum([RS+-]),DATESINPERIOD(Cal[Date],LASTDATE(Cal[Date]),-3,Month))
Here is the xlsx file.

I couldn't download the file but what you are after is what Rob Collie calls the 'Greatest Formula in the World' (GFITW). This is untested but try:
= CALCULATE (
    SUM ( WKRelStrength[RS+-] ),
    FILTER (
        ALL ( Cal ),
        Cal[Date] <= MAX ( Cal[Date] )
            && Cal[Date] >= MAX ( Cal[Date] ) - 90
    )
)
Note that this will give you the previous 90 days, which is approximately 3 months. Getting exactly the prior 3 calendar months may be possible, but it is arguably less useful since you would be comparing slightly different lengths of time (personal choice, I guess).
Also, this will behave 'strangely' if you have a total in that it will use the last date in your selection.
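If you did want exactly the prior 3 calendar months, one possible variant of the same pattern is the following (untested sketch; it assumes the EDATE function is available in your version of DAX):
= CALCULATE (
    SUM ( WKRelStrength[RS+-] ),
    FILTER (
        ALL ( Cal ),
        Cal[Date] <= MAX ( Cal[Date] )
            && Cal[Date] > EDATE ( MAX ( Cal[Date] ), -3 )
    )
)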

First of all, the formula that you are using is designed to work as a measure; it may not work well in a calculated column. Secondly, it is better to do such aggregations at the measure level than at the level of individual records.
Then again, I do not fully understand your situation, but if it is absolutely important for you to do this at the record level, you may want to use the EARLIER function.
If you want to filter a function based on a value in the corresponding row, you just have to wrap your column name in the EARLIER function. Try changing LASTDATE to EARLIER in your formula.
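For instance, as a calculated column on WKRelStrength, a rolling 90-day sum per ticker could be sketched like this (untested; it adapts the formula from the answer above and uses the column names from the question):
= CALCULATE (
    SUM ( WKRelStrength[RS+-] ),
    FILTER (
        ALL ( WKRelStrength ),
        WKRelStrength[Ticker] = EARLIER ( WKRelStrength[Ticker] )
            && WKRelStrength[Date] <= EARLIER ( WKRelStrength[Date] )
            && WKRelStrength[Date] > EARLIER ( WKRelStrength[Date] ) - 90
    )
)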

Related

SQL LAG function

I tried using the LAG function to calculate the value of previous weeks, but there are gaps in the data due to the fact that certain weeks are missing.
This is the table:
The problem is that LAG takes the previous week that is found in the table, but I would like the value to be zero if that row is not the immediately preceding week.
This is what I would like it to be:
I'm open to any solutions.
Thank you in advance
Your example data is baffling. You have multiple rows per time frame. The first column looks like a string, which doesn't really make sense for the comparison.
So, let me answer based on a simpler data model. The answer is to use a RANGE window frame. If you had an integer column that specified the time frame:
ordering   sales
1          10
2          20
3          30
5          50
Then you would phrase this as:
select max(sales) over (order by ordering range between 1 preceding and 1 preceding)
This would return the value from the "previous" row as defined by the first column. The value would be in a separate column, not a separate row.
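For instance, assuming a table named weekly_sales holding the sample data above (the table name is just for illustration), wrapping the window in COALESCE returns zero when the previous period is missing. This requires an engine that supports RANGE frames with numeric offsets (e.g. MySQL 8+, PostgreSQL 11+, Oracle):
SELECT
    ordering,
    sales,
    COALESCE(
        MAX(sales) OVER (ORDER BY ordering
                         RANGE BETWEEN 1 PRECEDING AND 1 PRECEDING),
        0
    ) AS previous_sales
FROM weekly_sales;
For ordering value 5 this yields 0, because there is no row with ordering 4.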

MDX - Filter different measures using different date intervals

This is similar to another question I made (MDX - Running Sum over months limited to an interval) but I feel that I was going off track there.
Let me start again.
I have a calculated measure
MEMBER [Measures].[m_active] AS ([Measures].[CardCount], [Operation].[Code].[ACTIVATION])
That I want to filter on a short interval (let's say from 10 January 2016 to 20 August 2017, those are parametrized)
and another calculated measure that I want to filter from the beginning of the date dimension (1st January 2010) to the end of the previous filter (20 August 2017 in this case); this measure is a running sum of all the preceding values:
MEMBER [Measures].[tot_active] AS (
    SUM({[Calendar.YMD].[2010].Children}.Item(0):[Calendar.YMD].CurrentMember, ([Measures].[CardCount], [Operation].[Code].[ACTIVATION]))
)
On the columns I have these calculated measures, and on the rows I have months (in the small interval range) crossjoined with another dimension:
SELECT
    {[Measures].[m_active], [Measures].[tot_active]} ON COLUMNS,
    NonEmptyCrossJoin(
        {Descendants([Calendar.YMD].[2016].[Gennaio]:[Calendar.YMD].[2017].[Agosto], [Calendar.YMD].[Month])},
        {Descendants([CardStatus.Description].[All CardStatus.Descriptions], [CardStatus.Description].[Description])}
    ) ON ROWS
If I put a date range in the WHERE clause, the first member is perfect but it ruins the second. How can I make the second member ignore the WHERE clause? Or is there another solution?
Without testing I'm a little bit unsure of the behaviour, but did you try moving the filter from a WHERE clause into a subselect?
Subselects are formed like this:
...
FROM (
SELECT
<date range for filter> ON 0
FROM cubeName
)
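Untested, but applied to your query it might look something like the following; cubeName is a placeholder, and the member set in the inner SELECT stands in for your parametrized date range. The usual reason this helps is that a subselect, unlike a WHERE slicer, does not change the current member, so the running-sum member can still reach back to 2010:
SELECT
    {[Measures].[m_active], [Measures].[tot_active]} ON COLUMNS,
    NonEmptyCrossJoin(
        {Descendants([Calendar.YMD].[2016].[Gennaio]:[Calendar.YMD].[2017].[Agosto], [Calendar.YMD].[Month])},
        {Descendants([CardStatus.Description].[All CardStatus.Descriptions], [CardStatus.Description].[Description])}
    ) ON ROWS
FROM (
    SELECT
        {[Calendar.YMD].[2016].[Gennaio] : [Calendar.YMD].[2017].[Agosto]} ON 0
    FROM cubeName
)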

How to join two tables in DAX using a custom condition

I have a Cartons table, which contains two datetime columns - entering warehouse date and exiting warehouse date. For my report I need to calculate a table which shows how many cartons are in the warehouse at the end of each day. My idea was to get, for each date, the number of cartons which have an entering date lower than the current date and an exiting date higher than the current date. So I need to translate the following SQL into DAX:
SELECT d.date, COUNT(c.Id) AS 'Count of cartons' FROM #dim d
INNER JOIN Inventory.Cartons c on d.date between c.EnteringWarehouseTime and c.ExitingWarehouseTime
GROUP BY d.date
ORDER By d.date
Where #dim is a table with all dates.
But joins in DAX can only be performed using relationships. I could cross join these tables and filter the result, but that operation would take too much time. Do I have other options for this?
Actually, you can simulate a relationship with DAX. However, if I understand your question and the data model correctly, you want to query all cartons that are still in the warehouse at a given time, right? For each day in the Date table you can count how many rows in the Cartons table qualify by filtering it on the currently iterated day. So this formula calculates:
For each day in the date table - VALUES('Date') - it will calculate how many rows of the Cartons table remain after some filtering - COUNTROWS('Cartons'). The filtering works like this: for the current value of the day - think of it as a foreach in C# - it checks how many rows in the Cartons table have an entering date lower than or equal to the current date of the iteration, and an exiting date higher than or equal to the current date or BLANK() - i.e. still in the warehouse.
CALCULATETABLE(
    ADDCOLUMNS(
        VALUES('Date'),
        "Cartons",
        CALCULATE(
            COUNTROWS('Cartons'),
            FILTER(
                'Cartons',
                'Cartons'[EnteringWarehouseTime] <= 'Date'[Date]
            ),
            FILTER(
                'Cartons',
                OR(
                    'Cartons'[ExitingWarehouseTime] >= 'Date'[Date],
                    ISBLANK('Cartons'[ExitingWarehouseTime])
                )
            )
        )
    )
)
This is very similar to the "Open orders" pattern. Check out daxpatterns.com
If you want to simulate a relationship you can always use the COUNTROWS() > 0 pattern as a filter.
For example, if you want to do a SUM(Value) on your main table, but only for those rows that are present in the referenced table - without a relationship:
CALCULATE(
    SUM('MainTable'[Value]),
    FILTER(
        'MainTable',
        CALCULATE(
            COUNTROWS('ReferencedTable'),
            'ReferencedTable'[PK] = 'MainTable'[FK]
        ) > 0
    )
)

Creating a DAX pattern that counts days between a date field and a month value on a chart's x-axis

I am struggling with a DAX pattern to allow me to plot an average duration value on a chart.
Here is the problem: My dataset has a field called dtOpened which is a date value describing when something started, and I want to be able to calculate the duration in days since that date.
I then want to be able to create an average duration since that date over a time period.
It is very easy to do when thinking about the value as it is now, but I want to be able to show a chart that describes what that average value would have been over various time periods on the x-axis (month/quarter/year).
The problem that I am facing is that if I create a calculated column to find the current age (NOW() - [dtOpened]), then it always uses the NOW() function - which is no use for historic time spans. Maybe I need a Measure for this, rather than a calculated column, but I cannot work out how to do it.
I have thought about using LASTDATE (rather than NOW) to work out what the last date would be in the filter context of any single month/quarter/year, but if the current month is only half way through, then it would probably need to consider today's date as the value from which to subtract the dtOpened value.
I would appreciate any help or pointers that you can give me!
It looks like you have a table (let's call it Cases) storing your cases with one record per case with fields like the following:
casename, dtOpened, OpenClosedFlag
You should create a date table with one record per day spanning your date range. The date table will have a month-ending date field identifying the last day of each month (and similarly for quarter and year). But this will be a disconnected date table: don't create a relationship between the Date column on the date table and your case open date.
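For example, the period-ending helper columns on the disconnected date table could be calculated columns along these lines (untested sketch; the column and table names are illustrative, and EOMONTH is assumed to be available in your version of DAX):
Month Ending   = EOMONTH ( DateTable[Date], 0 )
Quarter Ending = EOMONTH ( DateTable[Date], 2 - MOD ( MONTH ( DateTable[Date] ) - 1, 3 ) )
Year Ending    = DATE ( YEAR ( DateTable[Date] ), 12, 31 )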
Then use the iterator AVERAGEX to average the date differences.
Average Duration (days) :=
CALCULATE (
AVERAGEX ( Cases, MAX ( DateTable[Month Ending] ) - Cases[dtopened] ),
FILTER ( Cases, Cases[OpenClosedFlag] = "Open" ),
FILTER ( Cases, Cases[dtopened] <= MAX ( DateTable[Month Ending] ) )
)
Once you plot the measure against your Month you should see the average values represented correctly. You can do something similar for quarter & year.
You're a genius, Rory; Thanks.
In my example, I had a dtClosed field rather than an Opened/Closed flag, so there was one extra piece of filtering to do to test if the Case was closed at that point in time. So my measure ended up looking like this:
Average Duration :=
CALCULATE (
    AVERAGEX ( CasesOnly, MAX ( DT[LastDateM] ) - CasesOnly[Owner Opened dtOnly] ),
    FILTER (
        CasesOnly,
        OR (
            ISBLANK ( CasesOnly[Owner Resolution dtOnly] ),
            CasesOnly[Owner Resolution dtOnly] > MAX ( DT[LastDateM] )
        )
    ),
    FILTER ( CasesOnly, CasesOnly[Owner Opened dtOnly] <= MAX ( DT[LastDateM] ) )
)
And to get the chart, I plotted the DT[Date] field on the x-axis.
Thanks very much again.

Date range intersection in SQL

I have a table where each row has a start and stop date-time. These can be arbitrarily short or long spans.
I want to query the summed duration of the intersections of all rows with a given pair of start and stop date-times.
How can you do this in MySQL?
Or do you have to select the rows that intersect the query start and stop times, then calculate the actual overlap of each row and sum it client-side?
To give an example, using milliseconds to make it clearer:
Some rows:
ROW   START   STOP
1     1010    1240
2      950    1040
3     1120    1121
And we want to know the total time that these rows spent between 1030 and 1100.
Let's compute the overlap of each row:
ROW   INTERSECTION
1     70
2     10
3     0
So the sum in this example is 80.
Given that the intersection for the first row is 70, and
assuming @range_start and @range_end as your condition parameters:
SELECT SUM( LEAST(@range_end, stop) - GREATEST(@range_start, start) )
FROM Table
WHERE @range_start < stop AND @range_end > start
Using GREATEST/LEAST and the date functions, you should be able to get what you need while operating directly on the date type.
I fear you're out of luck.
Since you don't know the number of rows that you will be "cumulatively intersecting", you need either a recursive solution, or an aggregation operator.
The aggregation operator you would need is not an option, because SQL does not have the data type it is supposed to operate on (an interval type, as described in "Temporal Data and the Relational Model").
A recursive solution may be possible, but it is likely to be difficult to write, difficult for other programmers to read, and it is questionable whether the optimizer could turn such a query into an optimal data access strategy.
Or I misunderstood your question.
There's a fairly interesting solution if you know the maximum time you'll ever have. Create a table with all the numbers in it from one to your maximum time.
millisecond
-----------
1
2
3
...
1240
Call it time_dimension (this technique is often used in dimensional modelling in data warehousing.)
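If you don't want to populate the helper table by hand, a recursive CTE can generate the rows; this is a sketch that assumes MySQL 8+ (the session variable raises the default recursion cap above 1240):
CREATE TABLE time_dimension (millisecond INT PRIMARY KEY);

SET SESSION cte_max_recursion_depth = 10000;

INSERT INTO time_dimension (millisecond)
SELECT n FROM (
    WITH RECURSIVE seq (n) AS (
        SELECT 1
        UNION ALL
        SELECT n + 1 FROM seq WHERE n < 1240
    )
    SELECT n FROM seq
) AS gen;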
Then this:
SELECT
COUNT(*)
FROM
your_data
INNER JOIN time_dimension ON time_dimension.millisecond BETWEEN your_data.start AND your_data.stop
WHERE
time_dimension.millisecond BETWEEN 1030 AND 1100
...will give you the total number of milliseconds of running time between 1030 and 1100.
Of course, whether you can use this technique depends on whether you can safely predict the maximum number of milliseconds that will ever be in your data.
This is often used in data warehousing, as I said; it fits well with some kinds of problems -- for example, I've used it for insurance systems, where a total number of days between two dates was needed, and where the overall date range of the data was easy to estimate (from the earliest customer date of birth to a date a couple of years into the future, beyond the end date of any policies that were being sold.)
Might not work for you, but I figured it was worth sharing as an interesting technique!
After you added the example, it is clear that indeed I misunderstood your question.
You are not "cumulatively intersecting rows".
The steps that will bring you to a solution are:
Intersect each row's start and end points with the given start and end points. This should be doable using CASE expressions or something of that nature, something in the style of:
SELECT CASE WHEN startdate < givenstartdate THEN givenstartdate ELSE startdate END AS retainedstartdate, (likewise for enddate) AS retainedenddate FROM ... Cater for NULLs and that sort of stuff as needed.
With the retainedstartdate and retainedenddate, use a date function to compute the length of the retained interval (which is the overlap of your row with the given time section).
Finally, SELECT the SUM() of those retained intervals; a consolidated sketch follows.
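Putting those three steps together, a sketch along these lines should work in MySQL; your_table, start, stop, @given_start and @given_end are placeholders, and the arithmetic assumes numeric values as in the millisecond example (for real date-times you would use a function such as TIMESTAMPDIFF):
SELECT SUM(retainedenddate - retainedstartdate) AS total_overlap
FROM (
    SELECT
        CASE WHEN start < @given_start THEN @given_start ELSE start END AS retainedstartdate,
        CASE WHEN stop  > @given_end   THEN @given_end   ELSE stop  END AS retainedenddate
    FROM your_table
    WHERE start < @given_end
      AND stop  > @given_start
) AS clipped;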