SQL recursive join with window function

I am currently trying to run a simulation in SQL; the query monitors cost against a pre-defined target.
The amount being monitored is a 91-day moving average cost, so this uses a correlated subquery to get the correct amount, but on days when the performance target is breached, the amount is adjusted to the target (if the moving average is 110 against a target of 100, the monitor for this day, from the next day forward, takes 100 instead of 110, as the payment would have been reduced on this day).
For example, I want to measure on which days costs breach a target of, say, 100, where the costs are monitored on a 91-day moving average. When the target is breached there is a reduction in payment, and the amount included in the monitor is adjusted to the target. I have been able to implement this, but now I need to adjust over a 91-day moving window instead of from the beginning of the time period. This would be possible if I could have two recursive queries, but I don't believe that is possible. I have also tried various joining methods, but they all become circular.
Any suggested approach would be greatly appreciated.
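Because the 91-day window has to see the already-adjusted values, one way to sidestep the need for two recursive queries is to materialise the decision day by day with a loop (or cursor) rather than a recursive CTE. Below is a minimal T-SQL sketch of that idea, not a definitive implementation: daily_cost, cost_date and cost are assumed names, and the adjustment rule in the CASE expression is only one reading of the logic described above.

    DECLARE @target decimal(18,2) = 100;                      -- example target from the question
    DECLARE @d date, @last date, @raw decimal(18,2), @ma decimal(18,2);

    IF OBJECT_ID('tempdb..#monitor') IS NOT NULL DROP TABLE #monitor;
    CREATE TABLE #monitor
    (
        cost_date      date PRIMARY KEY,
        raw_cost       decimal(18,2),
        monitored_cost decimal(18,2),                         -- the value later 91-day averages will see
        breached       bit
    );

    SELECT @d = MIN(cost_date), @last = MAX(cost_date) FROM daily_cost;   -- assumes one row per day

    WHILE @d <= @last
    BEGIN
        SELECT @raw = cost FROM daily_cost WHERE cost_date = @d;

        -- 91-day window ending today: prior days contribute their already-decided monitored value,
        -- today contributes its raw cost because no decision exists for it yet
        SELECT @ma = AVG(v)
        FROM (
            SELECT monitored_cost AS v
            FROM #monitor
            WHERE cost_date BETWEEN DATEADD(day, -90, @d) AND DATEADD(day, -1, @d)
            UNION ALL
            SELECT @raw
        ) w;

        INSERT INTO #monitor (cost_date, raw_cost, monitored_cost, breached)
        VALUES (@d,
                @raw,
                CASE WHEN @ma > @target THEN @target ELSE @raw END,   -- assumed adjustment rule: cap at the target
                CASE WHEN @ma > @target THEN 1 ELSE 0 END);

        SET @d = DATEADD(day, 1, @d);
    END;

    SELECT cost_date, raw_cost, monitored_cost                 -- the breach days being monitored
    FROM #monitor
    WHERE breached = 1
    ORDER BY cost_date;

A recursive CTE can do the same walk, but carrying a 91-row window along as state is awkward, which is why a procedural pass is often the pragmatic choice for this kind of feedback calculation.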

Related

SQL Method for Cascading Workload Based on Rank and Available Hours

Recently I created an automated production scheduling tool through Excel that assigns a rank to items being produced in the same process, and then uses that rank in combination with the workload to create a schedule.
It functions exactly the way it is intended to, but due to the large amount of data, and the fact that it is Excel, it has very slow performance, which is why I am looking to move the calculations over to SQL.
The general logic is like this:
-Always produce everything from the first day before the second day
-Always produce items from an earlier rank before items from a later rank
You can see how this plays out in the image below, where the line has 21.5 hours today, so items will be produced on day 1 until the total equals 21.5, at which point the remainder is carried over to day 2 and so on.
I was able to do this in excel using lengthy positional based formulas, but I am trying to think of a way to get the same result in SQL without having to rely on looking at the row above.
I am not sure how to convey something like 'Subtract from the available time the production time of higher-priority items produced on the same day'.
I apologize if the question is unclear, but any advice would be appreciated.
Image of Production Hours Cascading by Priority and Day
Example of Position-Based Formula
Thanks to shawnt00, who put me in the right direction. Ultimately I had to modify the CASE statements a bit to go off the cumulative total instead, but I was able to get the desired results using a SUM() OVER (PARTITION BY ... ORDER BY ...) statement.
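For anyone landing here later, a minimal sketch of that cumulative-total approach might look like the following. All names (work_items, priority_rank, hours_required) are placeholders, and the daily capacity is assumed constant, which is a simplification of the original problem where each day can have its own hours; with varying capacity you would compare the item running total against a running total of the day capacities instead.

    DECLARE @daily_capacity decimal(10,2) = 21.5;   -- example figure from the question

    SELECT
        item_id,
        priority_rank,
        hours_required,
        running_hours,
        -- the running total tells you how much capacity has been consumed once this item is done,
        -- so dividing by the daily capacity gives the day the item finishes on
        CEILING(running_hours / @daily_capacity) AS scheduled_day
    FROM
    (
        SELECT
            item_id,
            priority_rank,
            hours_required,
            SUM(hours_required) OVER (ORDER BY priority_rank
                                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_hours
        FROM work_items
        -- add PARTITION BY <line/process> inside the OVER if several processes are scheduled together
    ) t
    ORDER BY priority_rank;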

Dynamically filtering large query result for presentation in SSRS

We have a system that records data captured from field equipment every minute to a SQL Server DB. This data is used for a number of purposes, one of which is charting in reports via SSRS.
The issue is that with such a high volume of data, when a report is run for a period of, for example, 3 months, the volume of data returned causes excessive report rendering times.
I've been thinking of finding a way of dynamically reducing the amount of data returned based on the start and end time periods chosen. Something along the lines of a sliding scale where, based on the duration between the start and end period, I can apply different levels of filtering, so that for larger periods more filtering occurs while for smaller periods less or no filtering occurs.
There is still a need to be able to produce higher resolution (as in more data points returned) reports for troubleshooting purposes.
For example:
Scenario 1:
User is executing a report for a period of 3 months. The result set returned by the query is reduced for performance reasons without adversely affecting what information the user wants to see (the chart is still representative of the changes over time).
Scenario 2:
User executes the report for a period of 1 hour, in order to look for potential indicator(s) of problems with field devices while troubleshooting the system. For this short time period, no filtering is applied.
My first thought was to use a modulo operation on the primary key of the data (which is an identity field), whereby the divisor is chosen depending on the difference between the start and end dates.
For example: if the difference between the start and end dates for the report execution period is 5 weeks, choose a divisor of 5 and apply a mod to the PK, selecting rows where the result is equal to zero.
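Sketched out, that might look roughly like the following (Readings, ReadingId, ReadingTime and Value are placeholder names, and apart from the 5-week/divisor-of-5 pairing above, the breakpoints are only illustrative):

    DECLARE @StartDate datetime = '2015-01-01';   -- example report parameters passed in from SSRS
    DECLARE @EndDate   datetime = '2015-04-01';

    -- pick a coarser divisor as the report window grows
    DECLARE @Divisor int =
        CASE
            WHEN DATEDIFF(day, @StartDate, @EndDate) <= 1  THEN 1    -- short troubleshooting window: keep every row
            WHEN DATEDIFF(day, @StartDate, @EndDate) <= 35 THEN 5    -- the 5-week / divisor-of-5 example above
            ELSE 20                                                  -- longer ranges: thin more aggressively
        END;

    SELECT ReadingId, ReadingTime, Value
    FROM Readings
    WHERE ReadingTime >= @StartDate
      AND ReadingTime <  @EndDate
      AND ReadingId % @Divisor = 0                                   -- the modulo thinning described above
    ORDER BY ReadingTime;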
I would love to get feedback as to whether this sounds like a valid approach or whether there is a better way to do this.
Thanks.

Speed up Running Total MDX calculated measure?

I'm using the following MDX to keep a running total of the Period Balance measure in my cube:
SUM({[Due Date].[Date].CurrentMember.Level.Item(0):[Due Date].[Date].CurrentMember}, [Measures].[Period Balance])
It works great; however, it's really slow as the amount of data displayed increases. I can't use MTD or YTD because the users may be analyzing data that overlaps years. Any way I can speed this up?
Thanks in advance.
I take it you've seen this? http://sqlblog.com/blogs/mosha/archive/2006/11/17/performance-of-running-sum-calculations-in-sp2.aspx
Failing that, there is another sample which uses the technique of taking the parent's prior totals and the parent's current child from first sibling to current - so you'd sum the prior months and then this month's days. That will only work if you have a date hierarchy, though:
http://www.ssas-info.com/analysis-services-articles/62-design/367-inventory-management-calculations-in-sql-server-analysis-services-2005-by-richard-tkachuk
I think the pictures there explain it better; it's the "Summing Increments" section.
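Paraphrased into MDX against the measure in the question, a two-level version of that idea might look roughly like this. It assumes a user hierarchy [Due Date].[Calendar] with a Month level above Date (the flat [Due Date].[Date] attribute hierarchy will not work for this), and the cube name is a placeholder; the article linked above extends the same pattern further up the hierarchy:

    WITH MEMBER [Measures].[Running Period Balance] AS
        // every whole month before the current date's month...
        SUM(
            { NULL : [Due Date].[Calendar].CurrentMember.Parent.PrevMember },
            [Measures].[Period Balance]
        )
        +
        // ...plus the days of the current month up to and including the current date
        SUM(
            { [Due Date].[Calendar].CurrentMember.FirstSibling : [Due Date].[Calendar].CurrentMember },
            [Measures].[Period Balance]
        )
    SELECT
        { [Measures].[Period Balance], [Measures].[Running Period Balance] } ON COLUMNS,
        [Due Date].[Calendar].[Date].MEMBERS ON ROWS
    FROM [YourCube]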
Are you query-logging and doing usage-based aggregations?

Calculated Member for Cumulative Sum

First some background: I have the typical Date dimension (similar to the one in the Adventure Works cube) and an Account dimension. In my fact table I have daily transaction amounts for the accounts.
I need to calculate cumulative transaction amounts for different accounts for different periods of time. The catch is that whatever is the first period shown on the resulting report should get its transaction amount as-is from the fact table and all the following periods in the report should have cumulative amounts.
For example, I might have a single account on rows and on columns I could have [Date].[Calendar].[Calendar Year].&[2005]:[Date].[Calendar].[Calendar Year].&[2010]. The transaction amount for 2005 should be the sum of transaction amounts that took place in 2005 for that specific account. For the following year, 2006, the transaction amount should be TransactionAmountsIn2005 + TransactionAmountsIn2006. The same goes for the remaining years.
My problem is that I don't really know how to specify this kind of calculated member in the cube because the end-user who is responsible for writing the actual MDX queries that produce the reports could use any range of periods on any hierarchy level of the Date dimension.
Hope this made some sense.
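For reference, the plain cumulative-sum member usually looks something like the sketch below ([Measures].[Transaction Amount] is an assumed name). The trouble is that it accumulates from the very first member of the level rather than from the first period the report happens to show, which is exactly the catch described above:

    CREATE MEMBER CURRENTCUBE.[Measures].[Cumulative Transaction Amount] AS
        // sums from the first member of the current level up to the current member,
        // regardless of which periods the report actually displays
        SUM(
            { NULL : [Date].[Calendar].CurrentMember },
            [Measures].[Transaction Amount]
        );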
Teeri,
I would avoid letting the end-user actually write MDX queries and just force them to use ranges you defined. To clarify, just give them a start and end date, or a range if you will, to select and then go from there. I've worked with accounting and finance developing cubes (General Ledger, etc) for years and this is usually what they were ultimately looking for.
Good luck!

track sales for week/month and find the best sellers

Let's say I have a website that sells widgets. I would like to do something similar to a tag cloud tracking best sellers. However, due to constantly acquiring and selling new widgets, I would like the sales to decay on a weekly time scale.
I'm having problems puzzling out how to store and manipulate this data and have it decay properly over time, so that something that was an ultra-hot item 2 months ago but has since tapered off doesn't show at the top of the list over the current best sellers. What would be the logic and database design for this?
Part 1: You have to have tables storing the data that you want to report on. Date/time sold is obviously key. If you need to work in decay factors, that raises the question: for how long is the data good and/or relevant? At what point in time has the "value" of the data decayed so much that you no longer care about it? When this point is reached for any given entry in the database, what do you do--keep it there but ensure it gets factored out of all subsequent computations? Or do you archive it--copy it to a "history" table and delete it from your main "sales" table? This is relevant, as it has to be factored into your decay formula (as well as your capacity planning, annual reporting requirements, and who knows what all else).
Part 2: How much thought has been given to the decay formula that you want to use? There's no end of detail you can work into this. Options and factors to wade through include but are not limited to:
Simple age-based. Everything before the cutoff date counts as 1; everything after counts as 0. Sum and you're done.
What's the cutoff date? Precisely 14 days ago, to the minute? Midnight as of two Saturdays ago from (now)?
Does the cutoff date depend on the item that was sold? If some items are hot but some are not, does that affect things? What if you want to emphasize some things (the expensive/hard to sell ones) over others (the fluff you'd sell anyway)?
Simple age-based decays are trivial, but can be insufficient. Time to go nuclear.
Perhaps you want some kind of half-life, Dr. Freeman?
Everything sold is "worth" X, where the value of X is either always the same or varies by the item sold. And the value of X can decay over time.
Perhaps the value of X decreases by one-half every week. Or every day. Or every month. Or (again) it may vary depending on the item.
If you do half-lives, the value of X may never reach zero, and you're stuck tracking it forever (which is why I wrote "part 1" first). At some point, you probably need some kind of cut-off, some point after which you just don't care. X has decreased to one-tenth the initial value? Three months have passed? Either/or, but the "range" depends on the inherent value of the item?
My real point here is that how you calculate your decay rate is far more important than how you store it in the database. So long as the data's there that the formula needs to do its calculations, you should be good. And if you only need the last month's data to do this, you should perhaps move everything older to some kind of archive table.
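To make the half-life idea concrete, the scoring can be pushed straight into a query. This is only a sketch with assumed names (a sales table with widget_id and sold_at columns) and SQL Server date functions; it gives every sale a weight of 1 at the moment it happens, halves that weight every 7 days, and ignores anything past a three-month cut-off:

    SELECT TOP (20)
        s.widget_id,
        SUM(POWER(0.5, DATEDIFF(day, s.sold_at, GETDATE()) / 7.0)) AS decayed_score,  -- one-week half-life
        COUNT(*) AS sold_in_window
    FROM sales AS s
    WHERE s.sold_at >= DATEADD(month, -3, GETDATE())   -- the cut-off: data older than this no longer matters
    GROUP BY s.widget_id
    ORDER BY decayed_score DESC;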
You could just count the sales for the last month/week/whatever, and sort your items according to that.
If you want, you can always add the total amount of sold items into your formula.
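The simple version of that suggestion, with the same assumed sales table as above, might be:

    SELECT
        s.widget_id,
        SUM(CASE WHEN s.sold_at >= DATEADD(day, -30, GETDATE()) THEN 1 ELSE 0 END) AS sold_last_30_days,
        COUNT(*) AS sold_all_time    -- total amount of sold items, available as an extra factor
    FROM sales AS s
    GROUP BY s.widget_id
    ORDER BY sold_last_30_days DESC, sold_all_time DESC;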
You might have a table which contains the definitions of the scoring criteria (most sales, most this, most that, etc.); then, for a given period, store in another table the attribution of points for each of the criteria defined in the criteria table. Obviously, a historical table will be used to store the score for each seller for a given period or promotion; call it whatever you want.
Does it help a little?
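For concreteness, the three tables described in that answer might be shaped roughly like this; every table and column name here is an assumption:

    CREATE TABLE criteria
    (
        criteria_id int IDENTITY PRIMARY KEY,
        name        varchar(50) NOT NULL               -- 'most sales', 'most this', 'most that', ...
    );

    -- points attributed per widget, per criterion, for a given period
    CREATE TABLE period_points
    (
        period_start date NOT NULL,
        period_end   date NOT NULL,
        criteria_id  int  NOT NULL REFERENCES criteria (criteria_id),
        widget_id    int  NOT NULL,
        points       int  NOT NULL,
        PRIMARY KEY (period_start, criteria_id, widget_id)
    );

    -- historical scores kept after each period or promotion closes
    CREATE TABLE score_history
    (
        period_start date NOT NULL,
        period_end   date NOT NULL,
        widget_id    int  NOT NULL,
        total_points int  NOT NULL,
        PRIMARY KEY (period_start, widget_id)
    );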