I have a Cartons table, which contains two datetime columns - the entering warehouse date and the exiting warehouse date. For my report I need to calculate a table which shows how many cartons are in the warehouse at the end of each day. My idea was to get, for each date, the number of cartons which have an entering date lower than the current date and an exiting date higher than the current date. So I need to translate the following SQL into DAX:
SELECT d.date, COUNT(c.Id) AS 'Count of cartons' FROM #dim d
INNER JOIN Inventory.Cartons c on d.date between c.EnteringWarehouseTime and c.ExitingWarehouseTime
GROUP BY d.date
ORDER By d.date
Where dim is a table with all dates.
But all joins in DAX can be performed only using relationships. I could make a cross join of these tables and filter the result, but this operation would take too much time. Do I have other options for this?
Actually you can simulate a relationship with DAX. However, if I understand your question and the data model correctly, you want to query all cartons that are still in the warehouse at a given time, right? For each day in the Date table you can count how many rows in the Cartons table qualify by filtering it by the currently iterated day.
So this formula calculates: for each day in the date table - VALUES('Date') - how many rows are present in the Cartons table - COUNTROWS('Cartons') - after some filtering. And the filtering works like this: for the current value of the day - think of it as a foreach in C# - it checks how many rows are present in the Cartons table where the exiting date is higher than or equal to the current date's value in the iteration, and the entering date is lower than the current date, or the exiting date is BLANK() - i.e. the carton is still in the warehouse.
CALCULATETABLE(
    ADDCOLUMNS(
        VALUES('Date'),
        "Cartons",
        CALCULATE(
            COUNTROWS('Cartons'),
            FILTER(
                'Cartons',
                'Cartons'[EnteringWarehouseTime] <= 'Date'[Date]
            ),
            FILTER(
                'Cartons',
                OR(
                    'Cartons'[ExitingWarehouseTime] >= 'Date'[Date],
                    ISBLANK('Cartons'[ExitingWarehouseTime])
                )
            )
        )
    )
)
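Outside of DAX, the same per-day interval count can be sketched in plain Python (a minimal sketch with made-up carton data; `None` plays the role of BLANK() for cartons still in the warehouse):

```python
from datetime import date, timedelta

# Hypothetical sample data: (entering, exiting) - None means still in the warehouse
cartons = [
    (date(2023, 1, 1), date(2023, 1, 3)),
    (date(2023, 1, 2), None),
    (date(2023, 1, 3), date(2023, 1, 5)),
]

def cartons_in_warehouse(days, cartons):
    """For each day, count cartons whose [entering, exiting] interval covers that day."""
    result = {}
    for d in days:
        result[d] = sum(
            1 for entered, exited in cartons
            if entered <= d and (exited is None or exited >= d)
        )
    return result

days = [date(2023, 1, 1) + timedelta(days=i) for i in range(5)]
counts = cartons_in_warehouse(days, cartons)
```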
This is very similar to the "Open orders" pattern. Check out daxpatterns.com
If you want to simulate a relationship you can always use the COUNTROWS() > 0 pattern as a filter.
For example, if you want to do a SUM(Value) on your main table, but only for those rows that have a match in the referenced table - without a relationship:
CALCULATE(
    SUM('MainTable'[Value]),
    FILTER(
        'MainTable',
        CALCULATE(
            COUNTROWS('ReferencedTable'),
            'ReferencedTable'[PK] = 'MainTable'[FK]
        ) > 0
    )
)
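The same "exists" logic can be sketched in plain Python with hypothetical tables (keep only main rows whose foreign key appears in the referenced table, then aggregate):

```python
# Hypothetical main and referenced tables
main_table = [
    {"FK": 1, "Value": 10},
    {"FK": 2, "Value": 20},
    {"FK": 3, "Value": 30},
]
referenced_table = [{"PK": 1}, {"PK": 3}]

# Simulate the COUNTROWS(...) > 0 filter: keep only rows with a match
referenced_keys = {row["PK"] for row in referenced_table}
total = sum(row["Value"] for row in main_table if row["FK"] in referenced_keys)
```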
I made a really simple example table with columns date and credit, so we can sum all credit values to get the saldo (balance) of the account. I can sum all credit values to get the saldo, but that is not what I want. I want to calculate the average saldo, so in order to do that I need to use a DateRange table with a date for every day and a query with this logic:
SELECT DRA.date, SUM(ACB.credit)
FROM AccountBalance ACB
JOIN DateRange DRA ON ACB.date <= DRA.date
GROUP BY DRA.date
http://sqlfiddle.com/#!18/88afa/10
The problem is when using this query on a larger range of dates, like a whole year for example.
The instruction tells the SQL engine to sum up all rows of credit before the current date, including the credit of the current row's date (JOIN ... ACB.date <= DRA.date), in order to get the account's credit for that day.
This is inefficient and slow for big tables because that sum already exists one row before, and I would like to tell the SQL engine to take that sum and only add the single row of credit for the current date.
Someone told me that I should use the LAG function, but I need an example first.
I think you simply need an analytic (window) function -
SELECT DRA.date,
       SUM(ACB.credit) OVER (ORDER BY DRA.date)
FROM DateRange DRA
LEFT JOIN AccountBalance ACB ON ACB.date = DRA.date;
Demo.
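The effect of the window function can be sketched in plain Python with hypothetical data (one running total per day, each day reusing the previous day's total instead of re-summing everything):

```python
# Hypothetical daily credits; days with no movement contribute 0
daily_credit = {"2020-01-01": 100, "2020-01-02": 0, "2020-01-03": -30}

running = 0
saldo_by_day = {}
for day in sorted(daily_credit):
    running += daily_credit[day]   # add only the current day's credit
    saldo_by_day[day] = running    # reuse the previous running total
```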
I have 3 tables. The table called agg (date, sname, open, high, low, close, volume) contains daily information for every stock for some number of past years. Another table, split (date, sname, post, pre), has info for every time any stock split. Another table, div (date, sname, dividend), has info for every time a stock had a dividend. I want to create a new table with a column that gives the percent change from the previous day's close to the current day's close, for every stock and every day listed in agg.
Here is the line I have for just the daily change, not including div and split:
create table daily
as
with prevclose as (
    select date, sname, close,
           lag(close) over (partition by sname order by date) pclose
    from agg
)
select a.*,
       100.0 * (close - pclose) / (case when pclose = 0 then null else pclose end) as prcnt
from prevclose a
where pclose != 0;
I want to change this code to incorporate the effect of splits and dividends, which is not reflected in the agg table. I don't even need the full calculation for this; I need help figuring out how to incorporate the condition into the new table. I only need to add in split and div info if there is split and div info for that particular stock and date. I think if I could just see the query for a similar problem it would help.
While I was setting up my visual page in Power BI, I came across a weird issue. I was trying to calculate an average of some values which come together with their dates in tableA.
More precisely, tableA has a date field and a numeric feature (feature), and there may be more than one value for the same date. In addition, the date field points to another date field in a common calendar table (calendarTable). I would like to calculate an average of 'feature' (let's say, the daily average).
To achieve this, I've tried to calculate a new measure as stated here:
Average = CALCULATE(
AVERAGE('tableA'[feature]),
USERELATIONSHIP('tableA'[date], 'calendarTable'[date]),
GROUPBY(date, 'calendarTable'[date])
)
What I got is a 'cumulative' average instead of a daily average. In other terms, for each date the set of values to be averaged increases, including the previous values.
I've also tried to perform the calculation in SQL with success (in DAX there is no need to refer to tableB as I used a calculated column):
SELECT
CAST(a.Date AS Date) AS Dates,
AVG(DATEDIFF(MINUTE, b.Date, a.Date)) AS AVG_DURATION
FROM
tableB AS b
INNER JOIN
tableA AS a
ON
a.ID = b.ID
GROUP BY
CAST(a.Date AS Date)
ORDER BY
Dates ASC;
Does anyone have an idea on how to get in DAX the same result as in SQL? I've already tried to apply some filters on dates but with no luck.
Thanks.
Although there is some confusion in your question, the instructions below should help you achieve your requirement.
If I understand correctly, you are looking for a daily AVERAGE of your column "Feature". You can do this with a new calculated table, using the GROUPBY function in DAX. Click on New table, use the code below, and check whether you get your expected output -
group_by_date =
GROUPBY (
your_table_name,
your_table_name[date_column_name].[Date],
"Average Feature", AVERAGEX(CURRENTGROUP(), your_table_name[feature])
)
Now, if you are looking for DAX to calculate the same (but redundant) result in each row, you can use this code below -
date_wise_average =
VAR current_row_date = MIN ( your_table_name[date_column_name].[Date] )
RETURN
    CALCULATE(
        AVERAGE(your_table_name[feature]),
        FILTER(
            ALL(your_table_name),
            your_table_name[date_column_name].[Date] = current_row_date
        )
    )
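The grouping logic itself is simple; as a plain-Python sketch with hypothetical values (several rows per date, averaged per date rather than cumulatively):

```python
from collections import defaultdict

# Hypothetical (date, feature) rows - several values can share a date
rows = [("2023-05-01", 10), ("2023-05-01", 20), ("2023-05-02", 30)]

sums = defaultdict(lambda: [0, 0])   # date -> [total, count]
for day, value in rows:
    sums[day][0] += value
    sums[day][1] += 1

daily_average = {day: total / count for day, (total, count) in sums.items()}
```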
The linked article explains and answers my requirement in R. How do I achieve this in PostgreSQL?
https://blog.exploratory.io/populating-missing-dates-with-complete-and-fill-functions-in-r-and-exploratory-79f2a321e6b5
In summary:
Basically, in the table shown, I want to auto-create new records for the missing dates (range from 10-02 to 10-14).
The problem with this table is that the discount rate and product data are still missing for those rows. The SQL statement should be able to detect, when there is a first item in a range (for example in the discount rate column), that the subsequent empty records in the discount rate should be populated with the same value as that first item until another item is found, then repeat the process.
For example, the discount rate from 10-02 to 10-14 should be 0.1 (based on the previous value on 10-01) and from 10-16 onward should be 0.2 (based on the previous value on 10-15). How do I achieve this in SQL if it involves hundreds or thousands of records?
You can left join to a list of dates generated through generate_series():
select ...
from generate_series(date '2020-10-01', date '2020-10-14', interval '1 day') as g(dt)
left join your_table t on t."date" = g.dt::date
where ...
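The join only creates the gap rows; the fill-forward of the discount rate is a separate step. As a plain-Python sketch of that step, assuming rows are already ordered by date and missing values are None:

```python
# Hypothetical ordered values after the left join; None marks a generated gap row
rates = [0.1, None, None, 0.2, None]

filled = []
last = None
for r in rates:
    if r is not None:
        last = r            # remember the most recent known value
    filled.append(last)     # carry it forward into the gap
```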
I have a couple of tables in PowerPivot:
A Stock table - WKRelStrength whose fields are:
Ticker, Date, StockvsMarket% (values are percentages), RS+- (values can be 0 or 1)
A Calendar Table - Cal with a Date field.
There is a many to one relationship between the tables.
I am trying to aggregate RS+- against each row for dates between 3 months ago and the date of that row - i.e. a 3-month-to-date sum. I have tried numerous calculations but the best I can return is a circular reference error. Here is my formula:
=calculate(sum([RS+-]),DATESINPERIOD(Cal[Date],LASTDATE(Cal[Date]),-3,Month))
Here is the xlsx file.
I couldn't download the file but what you are after is what Rob Collie calls the 'Greatest Formula in the World' (GFITW). This is untested but try:
= CALCULATE (
    SUM ( WKRelStrength[RS+-] ),
    FILTER (
        ALL ( Cal ),
        Cal[Date] <= MAX ( Cal[Date] )
            && Cal[Date] >= MAX ( Cal[Date] ) - 90
    )
)
Note, this will give you the previous 90 days which is approx 3 months, getting exactly the prior 3 calendar months may be possible but arguably is less optimal as you are going to be comparing slightly different lengths of time (personal choice I guess).
Also, this will behave 'strangely' if you have a total in that it will use the last date in your selection.
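The filter's effect - for each evaluation date, sum everything in the trailing 90-day window - can be sketched in plain Python with hypothetical data:

```python
from datetime import date, timedelta

# Hypothetical (date, RS) rows
rows = [(date(2023, 1, 1), 1), (date(2023, 2, 1), 1), (date(2023, 6, 1), 1)]

def trailing_sum(as_of, rows, days=90):
    """Sum RS values whose date falls within the trailing window [as_of - days, as_of]."""
    return sum(v for d, v in rows if as_of - timedelta(days=days) <= d <= as_of)

result = trailing_sum(date(2023, 3, 1), rows)
```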
First of all, the formula that you are using is designed to work as a measure. It may not work well as a calculated column. Secondly, it is better to do such aggregations at the measure level than at individual records.
Then again, I do not fully understand your situation, but if it is absolutely important for you to do this at a record level, you may want to use the EARLIER function.
If you want to filter a function based on a value in the corresponding row, you just have to wrap your column name in the EARLIER function. Try changing LASTDATE to EARLIER in your formula.